Yes I realise I need to get back to the maps stuff, I will sometime soon. But this is important given the nonsense I keep seeing.
There's been a lot of nonsense talked about ChatGPT and the other "AI" products. They're not AI, they're just more sophisticated versions of this guy
Your phone has offered autocomplete for your typing for ages. Gmail and O365 mail have offered one-click "yeah good" or "this sucks" canned responses for a long time. This ChatGPT stuff isn't far from that, just fancier and with VC fluff and nonsense and money. Lots and lots of money.
The Twitter prompt tweets folks post of the form "type 'people think I am', hit autocomplete, and post it"? This is the same thing, only the ChatGPT folks have burned billions of dollars to make a better autocomplete. You offer a prompt and it autocompletes an answer. The only difference is you don't have to start typing: it uses what you've suggested as a prompt to start doing the autocomplete. That's it. That's all this nonsense you've heard about is.
ChatGPT has no idea if the answer is correct. It's just spicy autocomplete. It literally cannot understand whether the text it sends you is correct. It's just going "statistically, if you use that word, the next word is likely to be this", but with enormous effort spent sucking down words (there are interesting copyright issues there, related to the sources of the training data) and extending how many words ahead it can pick, so that when you give it a prompt as a seed it can start autocompleting. It's very good at this, but it's also absolutely untrustworthy. It just autocompletes: it makes stuff up based on the words it's already put down, and keeps going.
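To make "it's just autocomplete" concrete, here's a toy sketch of next-word prediction (a bigram model, entirely my own illustration; real LLMs condition on far longer contexts with billions of parameters, but the principle is the same: pick the statistically likely next word, with no notion of whether the result is true):

```python
# Toy "spicy autocomplete": count which word follows which, then keep
# appending the statistically most likely next word. No understanding,
# no idea if the output is correct -- just word statistics.
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat ate the fish ."
).split()

# Count, for each word, which words follow it and how often.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def autocomplete(prompt, length=6):
    """Append the most common next word, over and over. That's it."""
    words = prompt.split()
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break  # never seen this word followed by anything; stop
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(autocomplete("the dog"))
```

Seed it with "the dog" and it cheerfully continues the sentence based on frequency alone. Scale the context window and the parameter count up by a few billion dollars and you have the general shape of the thing.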
Worse yet, the various Large Language Model systems cannot detect nuance.
Many years ago I worked on a spam detection system. A huge insight we had was that sometimes the system would be absolutely certain: this email is spam, or this email is good. And sometimes the system would go "this is a thousandth of a percent likely to be spam, and a ten thousandth of a percent likely to be good", because the email was not like anything we had seen before. Purely by the maths you would go "it's spam", right? It's one thousandth of a percent likely to be spam, and only one ten thousandth of a percent likely to be good. But actually the result should be 'fucked if I know', and punt it to a human. I tried to write it up here.
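In code, that insight looks something like this (the numbers and the `classify` function are made up for illustration, not the actual system):

```python
# Sketch of the spam insight: comparing two vanishingly small
# likelihoods should not produce a confident verdict.

def classify(p_spam, p_good, floor=0.01):
    """Return a verdict, or punt when BOTH likelihoods are tiny.

    Purely by the maths, p_spam > p_good means "spam". But if both
    numbers are microscopic, the email resembles nothing we've ever
    seen, and the honest answer is 'fucked if I know': punt to a human.
    """
    if p_spam < floor and p_good < floor:
        return "punt to a human"
    return "spam" if p_spam > p_good else "good"

print(classify(0.99, 0.001))        # confidently spam
print(classify(0.00001, 0.000001))  # 10x "more spammy", but really: no idea
```

The second call is the interesting one: one probability is ten times the other, and the naive maths answer is still the wrong answer.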
The ChatGPT type systems cannot do this level of "idk man, it's complicated". It's unclear whether an LLM-based system could ever deliver that sort of result; I can't see how it could, because it's just autocomplete. That's a problem. These systems just deliver An Answer, and it's just the one answer. Go talk to a lawyer (not a robot lawyer) about anything and they will tell you "well, that depends on the circumstances, and it's complicated".
This is why folks have noted that ChatGPT is absolutely certain even when it is wrong. The design is unable to handle "not sure" because there is no actual reasoning going on; it's just "OK, this word probably follows the last word, statistically". It's not AI. It's just fancy autocomplete backed by billions of dollars.
The OpenAI folks doing increasingly fancy autocomplete and saying this is a future path to a General Purpose AI, or to their dreaded Roko's Basilisk, are full of shit. If I were being kind, I'd suggest they should stop huffing each other's farts.
Worse still, you have folks suggesting that this spicy autocomplete could offer legal advice or medical advice, and holy hell, no. Tests asking ChatGPT for legal advice find that it just makes up legal citations that don't exist. The folks who are using this to provide medical advice (apparently without consent!) are terrible people who endanger lives.
Anyway, ChatGPT is a dead end if you actually wanted to build expert systems that solve real world problems. It's a party trick, like teaching your parrot to yell a swear word.
Unsurprisingly, the exact same folks who backed crypto and web3 are promoting this latest thing.
It's just a fancy version of autocomplete. Yes, it might help you reply to your manager's email asking what you did last week with a form response, but it's not actually that interesting as far as AI goes.
As a final note, the SEO spammers will use this to produce useless content, the GPT systems will ingest this rubbish (which is their own output, now on the web), and then produce more rubbish trained on their own output. That's going to produce increasingly bizarre results. Should be fun.
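You can sketch that degradation loop with a toy frequency model (entirely my own illustration, not how GPT training actually works): each "generation" is trained only on the previous generation's output, and variety collapses.

```python
# Toy feedback loop: repeatedly "retrain" a word-frequency model on its
# own sampled output. Common words get more common, rare words die out,
# and the vocabulary can only ever shrink -- it never grows back.
from collections import Counter
import random

random.seed(0)
corpus = "apple banana cherry date apple banana apple".split()

for generation in range(5):
    counts = Counter(corpus)
    # "Generate" the next training set by sampling words in proportion
    # to their frequency in the current one.
    words, weights = zip(*counts.items())
    corpus = random.choices(words, weights=weights, k=len(corpus))
    print(generation, sorted(set(corpus)))
```

Run it and watch the word list narrow generation by generation. Real model-on-model training is vastly more complicated, but the arrow points the same way: garbage feeding on garbage.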
Anyway, it's all nonsense and no the AI is not coming to eat your brain.