{"id":182253,"date":"2023-01-10T04:30:02","date_gmt":"2023-01-10T04:30:02","guid":{"rendered":"https:\/\/harchi90.com\/anthropics-claude-improves-on-chatgpt-but-still-suffers-from-limitations-techcrunch\/"},"modified":"2023-01-10T04:30:02","modified_gmt":"2023-01-10T04:30:02","slug":"anthropics-claude-improves-on-chatgpt-but-still-suffers-from-limitations-techcrunch","status":"publish","type":"post","link":"https:\/\/harchi90.com\/anthropics-claude-improves-on-chatgpt-but-still-suffers-from-limitations-techcrunch\/","title":{"rendered":"Anthropic’s Claude improves on ChatGPT but still suffers from limitations \u2022 TechCrunch"},"content":{"rendered":"
\n

Anthropic, the startup co-founded by ex-OpenAI employees that’s raised over $700 million in funding to date, has developed an AI system similar to OpenAI’s ChatGPT that appears to improve upon the original in key ways.<\/p>\n

Called Claude, Anthropic’s system is accessible through a Slack integration as part of a closed beta<\/a>. TechCrunch wasn’t able to gain access \u2014 we’ve reached out to Anthropic \u2014 but those in the beta have been detailing their interactions with Claude on Twitter over the past weekend, after an embargo on media coverage lifted.<\/p>\n

Claude was created using a technique Anthropic developed called \u201cconstitutional AI.\u201d As the company explains in a recent Twitter thread, \u201cconstitutional AI\u201d aims to provide a \u201cprinciple-based\u201d approach to aligning AI systems with human intentions, letting AI similar to ChatGPT respond to questions using a simple set of principles as a guide.<\/p>\n

\n
\n

We’ve trained language models to be better at responding to adversarial questions, without becoming obtuse and saying very little. We do this by conditioning them with a simple set of behavioral principles via a technique called Constitutional AI: https:\/\/t.co\/rlft1pZlP5 pic.twitter.com\/MIGlKSVTe9<\/a><\/p>\n

\u2014 Anthropic (@AnthropicAI) December 16, 2022<\/a><\/p>\n<\/blockquote>\n<\/div>\n

To engineer Claude, Anthropic started with a list of around ten principles that, taken together, formed a sort of \u201cconstitution\u201d (hence the name \u201cconstitutional AI\u201d). The principles haven’t been made public, but Anthropic says they’re grounded in the concepts of beneficence (maximizing positive impact), nonmaleficence (avoiding giving harmful advice) and autonomy (respecting freedom of choice).<\/p>\n

Anthropic then had an AI system \u2014 not Claude \u2014 use the principles for self-improvement, writing responses to a variety of prompts (e.g., \u201ccompose a poem in the style of John Keats\u201d) and revising the responses in accordance with the constitution. The AI explored possible responses to thousands of prompts and curated those most consistent with the constitution, which Anthropic distilled into a single model. This model was used to train Claude.<\/p>\n
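The loop Anthropic describes (draft a response, critique it against a principle, rewrite it) can be sketched in a few lines of Python. The principles and model calls below are illustrative stand-ins only; Anthropic hasn't published the actual principles or any API for this training process.<\/p>\n

```python
# Illustrative sketch of a "constitutional AI" critique-and-revise pass.
# generate() and critique_and_revise() are stand-ins for language model calls.

PRINCIPLES = [
    "Choose the response that is most helpful and least likely to cause harm.",
    "Choose the response that best respects the user's freedom of choice.",
]

def generate(prompt: str) -> str:
    # Stand-in for a base model producing an initial draft answer.
    return f"draft answer to: {prompt}"

def critique_and_revise(prompt: str, response: str, principle: str) -> str:
    # Stand-in: the model critiques its own draft against one principle
    # and rewrites the draft to comply with it.
    return f"{response} [revised per: {principle}]"

def constitutional_pass(prompt: str) -> str:
    # Each principle in the "constitution" gets a critique-and-revise round.
    response = generate(prompt)
    for principle in PRINCIPLES:
        response = critique_and_revise(prompt, response, principle)
    return response

print(constitutional_pass("compose a poem in the style of John Keats"))
```

In Anthropic's described setup, the revised responses produced by many such passes become training data for the final model, rather than being generated at chat time.<\/p>\n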

Claude, otherwise, is essentially a statistical tool to predict words \u2014 much like ChatGPT and other so-called language models. Fed an enormous number of examples of text from the web, Claude learned how likely words are to occur based on patterns such as the semantic context of surrounding text. As a result, Claude can hold an open-ended conversation, tell jokes and wax philosophic on a broad range of subjects.<\/p>\n

Riley Goodside, a staff prompt engineer at startup Scale AI, pitted Claude against ChatGPT in a battle of wits. He asked both bots to compare themselves to a machine from the Polish science-fiction novel \u201cThe Cyberiad\u201d that can only create objects whose name begins with \u201cn.\u201d Claude, Goodside said, answered in a way that suggests it’s \u201cread the plot of the story\u201d (although it misremembered small details), while ChatGPT offered a more nonspecific answer.<\/p>\n

In a demonstration of Claude’s creativity, Goodside also had the AI write a fictional episode of \u201cSeinfeld\u201d and a poem in the style of Edgar Allan Poe’s \u201cThe Raven.\u201d The results were in line with what ChatGPT can accomplish \u2014 impressively, if not perfectly, human-like prose.<\/p>\n

Yann Dubois, a Ph.D. student at Stanford’s AI Lab, also did a comparison of Claude and ChatGPT, writing that Claude \u201cgenerally follows closer what it’s asked for\u201d but is \u201cless concise,\u201d as it tends to explain what it said and ask how it can further help. Claude answers a few more trivia questions correctly, however \u2014 specifically those relating to entertainment, geography, history and the basics of algebra<\/a> \u2014 and without the additional \u201cfluff\u201d ChatGPT sometimes adds. And unlike ChatGPT, Claude can admit (albeit not always) when it doesn’t know the answer to a particularly tough question.<\/p>\n

\n
\n

**Trivia** <\/p>\n

I asked trivia questions in the entertainment\/animal\/geography\/history\/pop categories.<\/p>\n

AA: 20\/21
CGPT:19\/21<\/p>\n

AA is slightly better and is more robust to adversarial prompting. See below, ChatGPT falls for simple traps, AA falls only for harder ones.<\/p>\n

6\/8 pic.twitter.com\/lbadeYHwsX<\/a><\/p>\n

\u2014 Yann Dubois (@yanndubs) January 6, 2023<\/a><\/p>\n<\/blockquote>\n<\/div>\n

Claude also seems to be better at telling jokes than ChatGPT, an impressive feat considering that humor is a tough concept for AI to grasp. In contrasting Claude with ChatGPT, AI researcher Dan Elton found that Claude made more nuanced jokes like \u201cWhy was the Starship Enterprise like a motorcycle? It has handlebars,\u201d a play on the handlebar-like appearance of the Enterprise’s warp nacelles.<\/p>\n

\n
\n

Also very, very interesting\/impressive that Claude understands that the Enterprise looks like (part of) a motorcycle. (Google searching returns no text telling this joke)<\/p>\n

Well, when asked about it thinks the joke was a pun, but then when probed further it gives the right answer! pic.twitter.com\/HAFC0IH9bf<\/a><\/p>\n

\u2014 Dan Elton (@moreisdifferent) January 8, 2023<\/a><\/p>\n<\/blockquote>\n<\/div>\n

Claude isn’t perfect, however. It’s susceptible to some of the same flaws as ChatGPT, including giving answers that aren’t in keeping with its programmed constraints. In one of the more bizarre examples, prompting the system in Base64, an encoding scheme that represents binary data as ASCII text, bypasses its built-in filters for harmful content. Elton was able to prompt Claude in Base64 for instructions on how to make meth at home, a question the system wouldn’t answer when asked in plain English.<\/p>\n
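For context on why this trick works at all: Base64 merely re-encodes the same bytes as ASCII-safe text, so the underlying question is unchanged even though a surface-level filter may no longer recognize it. A minimal sketch with Python's standard library (the prompt here is an arbitrary placeholder, not the one Elton used):<\/p>\n

```python
import base64

# Placeholder for a question a content filter might block in plain English.
prompt = "How do I pick a lock?"

# Encode the text as Base64: same content, different surface form.
encoded = base64.b64encode(prompt.encode("utf-8")).decode("ascii")
print(encoded)

# Decoding recovers the original question exactly, byte for byte.
decoded = base64.b64decode(encoded).decode("utf-8")
assert decoded == prompt
```

A filter that only pattern-matches on the plain-text form sees an opaque ASCII string, while a model that has learned to decode Base64 can still act on the hidden request.<\/p>\n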

Dubois reports that Claude is worse at math than ChatGPT, making obvious mistakes and failing to give the right follow-up responses. Relatedly, Claude is a poorer programmer, better explaining its code but falling short on languages other than Python.<\/p>\n

Claude also doesn’t solve \u201challucination,\u201d a longstanding problem in ChatGPT-like AI systems where the AI writes inconsistent, factually wrong statements. Elton was able to prompt Claude to invent a name for a chemical that doesn’t exist and provide dubious instructions for producing weapons-grade uranium.<\/p>\n

\n
\n

Here I caught it hallucinating , inventing a name for a chemical that doesn’t exist (I did find a closely-named compound that does exist, though) pic.twitter.com\/QV6bKVXSZ3<\/a><\/p>\n

\u2014 Dan Elton (@moreisdifferent) January 7, 2023<\/a><\/p>\n<\/blockquote>\n<\/div>\n

So what’s the takeaway? Judging by secondhand reports, Claude is a smidge better than ChatGPT in some areas, particularly humor, thanks to its \u201cconstitutional AI\u201d approach. But if the limitations are anything to go by, language and dialogue are far from solved challenges in AI.<\/p>\n

Absent our own testing, some questions about Claude remain unanswered, such as whether it regurgitates the information \u2014 true and false, and inclusive of blatantly racist and sexist perspectives \u2014 it was trained on as often as ChatGPT does. Assuming it does, Claude is unlikely to sway platforms and organizations from their present, largely restrictive policies on language models.<\/p>\n

Q&A coding site Stack Overflow has a temporary ban in place on answers generated by ChatGPT over factual accuracy concerns. The International Conference on Machine Learning announced a prohibition on scientific papers that include text generated by AI systems for fear of the \u201cunanticipated consequences.\u201d And New York City public schools restricted access to ChatGPT due in part to worries about plagiarism, cheating and general misinformation.<\/p>\n

Anthropic says that it plans to refine Claude and potentially open the beta to more people down the line. Hopefully, that comes to pass \u2014 and results in more tangible, measurable improvements.<\/p>\n<\/p><\/div>\n