Anthropic's Claude AI Can Respond With Charts, Diagrams, and Other Visualschat
- Reference: 0180960306
- News link: https://features.slashdot.org/story/26/03/12/1710246/anthropics-claude-ai-can-respond-with-charts-diagrams-and-other-visualschat
- Source link:
> As an example, Anthropic says a conversation about the periodic table could lead Claude to generate a visualization of it, featuring interactive elements that let you click inside the table for more information. Another example shows how Claude can generate a visual related to a question about how weight travels through a building. Though Claude will automatically determine whether it should generate a visualization in your chat, Anthropic notes that you can also ask the chatbot to generate a diagram, table, or chart directly. [...]
>
> Anthropic already allows you to create charts, documents, tools, and apps through Claude's "artifacts" feature, which opens in a side panel where you can interact, share, and download the AI-generated creation. But, as noted by Anthropic, artifacts are persistent, while the visualizations created within Claude's conversations will change or disappear as the conversation progresses. You can also ask Claude to make changes to the visualizations it creates.
[1] https://www.theverge.com/ai-artificial-intelligence/893625/anthropic-claude-ai-charts-diagrams
How long until (Score:2)
How long until it can spew bullshit using full PowerPoint presentations? I ask because I'm starting an initiative at work to replace management with AI, and being able to put unrealistic hallucinations regarding scheduling, etc. into PowerPoint form is currently the only thing holding the project back.
Re: (Score:2)
I've seen a presentation by a vibe coder where not only was the presentation generated from his topic structure by AI, but there was also a speaking AI in the Zoom meeting. The best bit was that the AI was much more cynical about the risks and problems of AI than any of the other presenters.
The only problem is that the presentation was quite good and informative. I guess we're probably at least some months, if not years, away from being able to create truly information-free presentations with AI.
Re: How long until (Score:2)
I think this doesn't get enough attention. I believe management jobs are a good candidate to be replaced by AI. The way LLMs talk actually reminds me of managers; I guess it's the statistical nature of an LLM, saying what is expected. Wrapping everything in corporate speak should be dead simple. Making plans that sound good but don't work? AI may actually produce something more useful. Seriously!
Re: (Score:2)
> The only problem is that the presentation was quite good and informative. I guess we're probably at least some months if not years away from being able to create truly information free presentations with AI.
This addresses two issues at once. One is that, on average, management isn't even as useful as an AI chatbot. Two, AI usually can't give precise, accurate information without hallucinations, something that isn't actually as necessary for high-level guidance. But as you mention, having a high-level view of risks and problems is getting harder with the complexity that technology brings.
For now, AI is going to be better at eliminating middle and upper management and leave us with just people a
Visual spam and hallucinations are great. (Score:2)
They beat other hallucinations by a large margin.
You put the lyin' in the coconut (Score:4, Insightful)
Anthropic's Claude AI Can Respond With Charts, Diagrams, and Other Visualschat
Clearly this is a typo. It's supposed to say "Visuals Shat".
Not impressive (Score:2)
AI can draw. Making charts etc. seems like an obvious sub-ability.
What would impress me is when you ask it a math equation, instead of attempting to do the math itself, it opens up the calculator app and uses it to solve the equation. When asked to translate, it opens up a translation app and uses it. When asked for the current news, it goes to a news website. When asked for current stock prices, it goes to a finance site and gets the information from there.
We do not want just the ability to answer ques
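The comment above describes routing questions to external tools instead of having the model answer from its weights. A minimal, hypothetical sketch of that routing idea in Python, with a safe arithmetic evaluator standing in for the "calculator app" (the router and function names are my own illustration, not any vendor's API):

```python
import ast
import operator

# Whitelisted operators for the toy calculator; anything else is rejected.
OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def calc(expr: str) -> float:
    """Evaluate a plain arithmetic expression without eval()."""
    def ev(node):
        if isinstance(node, ast.Expression):
            return ev(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
            return OPS[type(node.op)](ev(node.operand))
        raise ValueError("unsupported expression")
    return ev(ast.parse(expr, mode="eval"))

def answer(question: str) -> str:
    # Toy router: anything that parses as arithmetic goes to the
    # calculator; everything else would fall through to the model.
    try:
        return str(calc(question))
    except (ValueError, SyntaxError):
        return "(hand off to the model)"
```

The same dispatch shape generalizes to the other examples in the comment: a translation request goes to a translation backend, a stock-price request to a finance API, and only the residue reaches the LLM.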
Re: (Score:2)
AIs already do math outside the LLM. Ask ChatGPT for the square root of 12345 and it replies immediately with the right answer rather than trying to figure it out itself.
Re: (Score:2)
For the most part, it can do this. The problem is that the more external capabilities you give it, the more likely it is to imagine an API that doesn't exist. So instead of getting you the answer, it will post pseudocode for a non-existent function call directly into the chat.
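One hedged sketch of a mitigation for the failure mode described above: before executing a tool call the model proposes, check it against an explicit registry, so a hallucinated API is rejected instead of being pasted into the chat as if it ran. The tool names and JSON shape here are illustrative assumptions, not any real vendor's protocol:

```python
import json

# Explicit registry of tools that actually exist; stub backends for the demo.
TOOLS = {
    "get_stock_price": lambda symbol: 123.45,
    "get_weather": lambda city: "sunny",
}

def run_tool_call(raw: str) -> str:
    """raw is the model's proposed call, e.g. '{"name": "...", "args": {...}}'.
    Unknown or malformed calls are refused rather than echoed into the chat."""
    try:
        call = json.loads(raw)
        fn = TOOLS[call["name"]]
        return str(fn(**call["args"]))
    except (json.JSONDecodeError, KeyError, TypeError):
        return "error: unknown or malformed tool call"
```

A call to an invented function like `imaginary_api` then surfaces as an explicit error the orchestration layer can feed back to the model, rather than pseudocode shown to the user.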
So what? (Score:2)
So can my home LLM and it costs nothing. What's the big deal here?
Re: (Score:2)
People are slowly catching on to the capabilities of this stuff.
Only a matter of time before the public strings us up. We've only been safe because of the broad ignorance people have about what this stuff is going to do to every industry.
Re: (Score:2)
Well, your home LLM costs power and compute time.
A 2B pencil and some paper would cost you less and you'd have just as good a graph. Probably better.
Oh boy! (Score:2)
> As an example, Anthropic says a conversation about the periodic table could lead Claude to generate a visualization of it
Generate a "visualization"? Is that what they're calling hallucinations now? I mean, if AI can make up entire legal precedents, I'm sure it can say "hold my beer" while it adds a bogus element or two to the periodic table.
Maybe they should have picked a less obvious example, like something from the social sciences where there's already so much BS that another clinker might go unnoticed.
We have charts and graphs to back us up (Score:1)
So fuck off
Not an improvement. (Score:2)
The problem is that this only makes an unthinking statistical word selection machine seem more credible. Considering how many people already think AI like this are actually thinking (intelligent), I see this as being a step in the wrong direction.
Re: (Score:2)
Agreed. It is obvious that "AI researchers" and "AI reviewers" aren't remotely interested in posing challenging problems to AI. They softball because they know that keeping it safe is the only way they'll get anything out at all. But because they do so, people have become convinced that AI is usable, reliable, and trustworthy.
The gap between the promise and the reality is, I would say, probably in the order of a couple of centuries of work, and mostly in directions that LLMs can't go.
Re: (Score:2)
> The problem is that this only makes an unthinking statistical word selection machine seem more credible. Considering how many people already think AI like this are actually thinking (intelligent), I see this as being a step in the wrong direction.
They're not trying to improve it. They're trying to get it to appeal to managerial types who like charts and graphs more than actual data, because pretty colors are pretty. If they can get the AI responding to any question with a pretty chart or graph, they'll have sold it to many more decision makers just by having charts and graphs. It doesn't really matter whether the data is accurate. That won't matter until after the sale, when the decision makers are asking why their employees are telling them that