
AI, ML and Privacy


For anyone who’s tried ChatGPT (or Bard), it comes across as a very useful tool, maybe not as useful as hyped (which is fairly obvious to anyone who’s asked it to generate code), but still useful. I’m struggling a little to understand how it’s sustainable, or ever was, as it seems to be at odds with what we currently deem acceptable.

After jumping through hoops with certain large social media companies, repeatedly in many cases, we seem to have established that you can’t copy other people’s stuff without asking and paying, and you can’t retain their stuff if they ask you to delete it.

Problem #1: these AIs have been trained on data from the Internet, apparently with no thought to privacy, copyright or permission. I’m guessing this was done because it was the only practical way to do it (and the Internet is full of other people’s stuff).

Problem #2: we now have the right to be forgotten. However, current Large Language Models learn like people, and just like people they can’t unlearn things, because what they know is all interdependent and interrelated.

So it would seem that even if they started from scratch and were somehow able to screen every single item of information going into the system, so that nothing in there breached anything, then the first time the screening process made a mistake, or the first time someone successfully challenged an item of data (probably daily), you’d be back to starting from scratch again.

This isn’t new information; they would have known it before writing a line of code, and before raising tens (or hundreds) of billions of dollars in funding. What am I missing? Did they do this deliberately, knowing they could raise lots of money before being stopped, or did they really just not think about it? Anyone have any thoughts? On the one hand it is quite useful; indeed, there is an AI “bot” available for this forum. On the other hand, because the bot can answer personal messages, it would potentially have access to non-public information, which sounds like a really bad idea, all things considered.

… and the list goes on …
