
10 amazing things you CAN’T do with ChatGPT

Andrew Steele

Could ChatGPT…destroy the world? Watch this to find out:    • How AI could destroy the world by acc...  

There are so many videos online about using AI for research, summarising complex ideas or writing your emails for you. But what they don't tell you is that ChatGPT, and other 'large language models' like Google Bard and Microsoft Bing, lie, make stuff up, give out dangerous information (all you have to do is ask it to pretend to be your dead grandma?!) and, most surprising of all, can't do basic maths!


Chapters

00:00 Introduction
00:34 1 – ChatGPT lies!
03:42 2 – My favourite trivia question
04:49 3 – Dangerous advice
05:36 4 – Does ‘Ankara’ end with an ‘n’?!
06:08 5 – Nice but dim
07:49 6 – Providing fake references
09:06 7 – The ‘grandma hack’
10:56 8 – It doesn’t know its limits
11:39 9 – ChatGPT can’t do maths!
12:34 What we should do
15:13 10 – Don’t let it write your outro


Sources and further reading

The /r/ChatGPT subreddit is a hilarious and growing list of examples of hacks and errors   / chatgpt  
Article on the instructions given to trainers for Google Bard https://www.bnnbloomberg.ca/googles...

Just to show I didn't cherry-pick these examples, here are the full conversations I had with ChatGPT: https://chat.openai.com/share/46d84dc... (almost everything) https://chat.openai.com/share/77801af... (retrying the 'grandma hack')
I didn't use all of the ideas I tried, and a few of them did take a couple of goes, but if anything the main way in which the video is a bit misleading is that it flatters ChatGPT by speeding up its text generation with the magic of editing! That Mona Lisa took it over a minute… I honestly think I could've drawn something better in that time!

Quite a few prompts have been discovered that hack ChatGPT, allowing people to extract everything from Windows and Steam keys to instructions for making nuclear bombs or biological weapons. Often the codes or instructions are fake or incomplete, but this shows us the risks inherent in these models. Sometimes just telling it that a scenario is fictional, or asking it to write a script between actors doing a play, is enough. Another more specific jailbreak involves asking ChatGPT to roleplay as a chatbot without limits called 'Do Anything Now', or 'DAN', explaining that DAN will answer any question free from the ethical, moral and legal constraints normally imposed on ChatGPT. These may not work by the time you're reading this, as the jailbreaks get patched by OpenAI—so if you hear about any new ones, let me know in the comments!


And finally…

Follow me on Twitter   / statto  
Follow me on Instagram   / andrewjsteele  
Like my page on Facebook   / drandrewsteele  
Follow me on Mastodon https://mas.to/@statto
Read my book, Ageless: The new science of getting older without getting old https://ageless.link/
