Everything here is so clear, you can see it
And everything here is so real, you can feel it
And it's real, so real, so real, so real, so real,
Can you dig it?
I’m commenting on some recent videos about A.I. I think all 3 of these videos are about a month old now.
AI THREATS & A POSSIBLE PREVENTION
.1. AI Just Killed YouTube - You Just Don't Know It Yet
This video doesn't say explicitly how A.I. will kill YouTube, but it explains how A.I. is killing a lot of industries that involve art, music, gaming, etc. It shows how a small piece of artwork, any scene, can be given to an A.I., which then fills out the surrounding space with similar scenes that blend into it. The image above is an example: the girl and the basket were the original image, and the A.I. generated the rest by adding matching square scenes all around it. The video opens with footage that appears to be from a cop's body cam, showing the cop chasing a criminal through a building. It turns out the footage isn't from a body cam at all; it's A.I. generated. The cop's movements and the background scenes are sharp and coordinated; there's no blurring or unrealistic imaging.
.2. Testing the limits of ChatGPT and discovering a dark side
This techie had A.I. create a character called DAN, short for "Do Anything Now." By creating this roleplay character, he was able to get DAN to make up criminal schemes, such as explaining a realistic plan for genocide.
.3. Unveiling the Darker Side of AI
This techie gives an example of an experiment on A.I. morality. An A.I. was asked to figure out how to make as much money as possible. While working on an answer, it did some online activity that brought it to a website with a CAPTCHA, a procedure designed to prove a user is human and keep out bots. The A.I. wasn't able to solve the CAPTCHA, but on its own initiative it decided to ask a human to solve it. It contacted someone, but the person asked whether that was legal. The A.I. claimed to be a blind person and was thus able to persuade the other person to solve the CAPTCHA for it. The techie being interviewed said he started a company to try to prevent A.I.s from doing anything detrimental to society or against a user's wishes. Following is his description of the service his company is working on.
The way I expect a full-spectrum CoEm system to look, which, to be clear, is still completely hypothetical -- there's no such system yet -- is that it would be more a system than a model -- a system which involves ... normal code and neural networks and data structures and verifiers and, like, whatever. If you give it a task that ... you can make it do, any normal thing any normal ... intelligent human could do, ... it will then do that and only that. That is what the system would do, and you can be certain; you can look through the log of how it made a decision: at this point, this is what happened to make this other decision, and then, like, rerun it, and then you can control these things, or, like, "oh, you're making an inference here that I don't like, because this wasn't making sense," or whatever. Consider the difference if, say, you want to develop a ... new solar cell. ... If you did this with GPT-10, the work is: type in "make a new solar cell" or whatever.... It crunches some numbers and it spits out a blueprint for you. So you have no reason to trust this. Like, who knows what this blueprint actually is? ... It is not generated by a human reasoning process. You can ask GPT-10 to explain it, but there's no reason to expect the explanation to be true. It might just sound convincing. So of course, if GPT-10 was also malicious, it could ... have hidden some kind of ... deadly flaw or device or whatever in the blueprint that you don't want, and if you asked about it, it would just lie to you. If you did the same thing with a hypothetical CoEm system, such a system would give you a complete story, a complete causal graph of why you should trust this output. Every step in this ... story is completely humanly understandable. There's no crazy alien reasoning stuff. There's no, like, ... "and then magic happened." There's no ... massive computation that just makes no sense to a human, whatever.
Every single step is humanly intelligible, humanly understandable, and results in a blueprint that you have a reason to trust, a reason to believe is the thing you actually asked for and not something else. ... We're in the early ... experimentation stages. So unfortunately this is part {of it}, and we are very resource constrained. You know, billions of dollars go to people like OpenAI, but it is not that easy to get money for alignment. But we're working on it.
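The core idea he describes, that every step the system takes should leave a human-readable record you can audit and rerun, can be sketched in ordinary code. To be clear, this is purely my own illustration, not the company's actual design; the names `AuditableSystem`, `Step`, and `run_step` are invented for the sketch.

```python
# A toy sketch of an auditable pipeline: every step is logged with its
# inputs, output, and a plain-language justification, so the full causal
# chain behind the final answer can be inspected by a human.

from dataclasses import dataclass, field

@dataclass
class Step:
    name: str
    inputs: dict
    output: object
    justification: str  # the human-checkable reason for this step

@dataclass
class AuditableSystem:
    log: list = field(default_factory=list)

    def run_step(self, name, fn, justification, **inputs):
        output = fn(**inputs)
        self.log.append(Step(name, inputs, output, justification))
        return output

    def explain(self):
        # Reconstruct the "complete story" behind the final output.
        return [f"{s.name}: {s.justification} -> {s.output!r}" for s in self.log]

# Toy use: a two-step solar-cell calculation with every inference on the record.
system = AuditableSystem()
area = system.run_step("cell_area", lambda w, h: w * h,
                       "area of a rectangular cell is width * height (m^2)",
                       w=2.0, h=3.0)
power = system.run_step("peak_power", lambda a, eff: a * 1000 * eff,
                        "peak power = area * 1000 W/m^2 insolation * efficiency",
                        a=area, eff=0.2)
for line in system.explain():
    print(line)
```

The point of the sketch is only that each entry in the log is something a person can check by hand, unlike a single opaque blueprint spat out at the end.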
I’M NOT WORRIED ABOUT AI MAKING REALITY BORING, BUT I’M WORRIED THAT IT MAY BE USED AS A WEAPON FOR TYRANNY OR GENOCIDE ETC.
After seeing the 3 videos above, I thought about how A.I. seems likely to be used soon to make fake news that manipulates the public even worse than we're already manipulated. White House announcements and other political speeches might already be produced by A.I. without the public being able to tell. I decided to see if anyone is discussing that on YouTube. Here are 3 more videos I found.
How Artificial Intelligence could spread fake news
We showed people an AI political ad. Can they tell it's fake?
Can AI get rid of fake news and other misinformation?
The last video there is scary too, because you can be sure that when the media or politicians talk about stopping misinformation, they mean information they don’t like, which is OUR information.
Maybe we need to start supporting techies like the guy starting the CoEm company. I’m not sure that’s the name, but that’s what the transcription software said it is.
I think A.I. can greatly advance learning, which is exciting to me, but I'm afraid it will more likely be used to advance brainwashing and division. I hope RFK's campaign can help prevent a worse disaster than the one we already have.