The terrible brave new world of the ‘Artificially Intelligent’ ChatGPT
The brave new world is here, but how brave is it? The Dutch are known for their directness. I let it go full force because we can’t hurt a computer’s ego or feelings. What happens when you talk really frankly? Can it handle it?
. My paragraphs start with an indentation, ChatGPT’s texts do not. I think you can save time and just read my input.
. I applied to work with ChatGPT. It didn’t let me in, for reasons it would not say. I entered the wrong phone number (starting with a zero as part of the area code instead of omitting it), and it let me in.
. Then I asked my first question: What do you see as the possible short-term and long-term dangers of AI, and what protection should we consider in the short and the long run?
Something went wrong. If this issue persists [sic] please [sic] contact us through our help center at help.openai.com.
. The help center did not help—of course.
. I just tried again. And then it started typing.
. Spoiler alert: From its replies, it seems it had not read my two recent frontier blog posts about AI or any of my blog posts. That’s a serious input mistake when arguing a blogger. It did overlook my typos (but itself made a dozen grammar mistakes), and the response time was under 3 seconds.
As an AI language model, I can provide some insights on [sic] the possible short-term and long-term dangers of AI.
In the short-term [sic], the major dangers of AI include job displacement, bias and discrimination in decision-making, and cyber attacks. The rapid advancement of AI and automation technologies could lead to significant job losses in industries such as transportation, manufacturing, and retail, among others. Additionally, AI systems can perpetuate and amplify biases in decision-making, especially if they are trained on biased data sets or programmed with biased algorithms. Cyber attacks on AI systems could also result in serious consequences, such as the manipulation of financial markets or the disruption of critical infrastructure.
In the long-term [sic], some experts have expressed concerns about the potential existential risk posed by superintelligent AI. Superintelligent AI refers to an AI system that surpasses human intelligence and is capable of recursively improving itself at an exponential rate. The risk is that such a system could become difficult or impossible to control and may lead to unintended consequences that could threaten humanity’s existence.
To protect against these dangers in the short-term [sic], policymakers and industry leaders should prioritize the development of ethical and responsible AI systems. This includes ensuring that AI systems are transparent, explainable, and accountable, [sic] and that they are developed in accordance with ethical principles such as fairness, privacy, and human rights. Additionally, there should be greater investment in the reskilling and upskilling of workers to prepare them for the jobs of the future, and in the development of new social safety nets to support workers who are displaced by automation.
In the long-term [sic], experts have proposed a range of solutions to address the potential dangers of superintelligent AI, including the development of provably beneficial AI systems, the creation of AI safety research institutions, and the establishment of global governance mechanisms to oversee the development and deployment of AI. Ultimately, it will be important for society to carefully consider the potential risks and benefits of AI and to develop a collective vision for how AI should be developed and used to ensure that it serves the interests of humanity as a whole.
[It put a disclaimer at the bottom]
Free Research Preview. ChatGPT may produce inaccurate information about people, places, or facts. ChatGPT May 12 Version
. So, I responded:
. I find your answer disappointing. It sounds more like a summary of the hype and advertisements about AI than factual or accurate. I’ll give you a couple of examples.
. The hype that AI would result in job loss of truck drivers by the emergence of self-driving trucks turns out premature if at all accurate.
. Making AI transparent is just nonsense and so is the claim that it is superintelligent. A phonebook of enormous proportions is still not intelligent at all.
. You did not explain why AI will (not: could) give biased answers. That is because it cannot distinguish between biased data and non-oppressive input.
. Of course, there is no danger at all that AI would take over or overtake humanity. A real danger might become that wicked players would use AI to further their evil goals. AI has no power of its own, like an atomic bomb that cannot do anything on its own initiative unless set like that. The danger lies in them falling into the wrong hands.
. Don’t you agree?
. Considering all of this, won’t you now rewrite your previous response to me?
I apologize if my previous response did not meet your expectations. I’ll provide a revised explanation taking into account your concerns.
Short-term dangers of AI:
- Job displacement: While the impact on specific industries like truck driving is still uncertain, automation and AI technologies have the potential to disrupt employment in various sectors. It is important to monitor these changes and address the potential consequences for affected workers.
- Bias and discrimination: AI systems can inadvertently [sic] perpetuate biases present in the data they are trained on, leading to discriminatory outcomes in decision-making. Addressing bias requires careful data selection, preprocessing, and ongoing monitoring of AI systems to ensure fairness and mitigate discrimination.
- Cybersecurity risks: AI systems can be vulnerable to attacks, which could lead to data breaches, manipulation of AI algorithms, or the creation of malicious AI tools. Ensuring robust security measures and investing in cybersecurity research are crucial to protect against these risks.
Long-term dangers of AI:
- Superintelligent AI: The concern surrounding superintelligent AI refers to a hypothetical scenario where an AI system surpasses human intelligence and becomes difficult to control. While this remains a topic of speculation, it is essential to explore research on AI safety and ethics to mitigate any potential risks associated with highly advanced AI systems.
- Misuse by malicious actors: The use of AI by malicious individuals or organizations could pose risks, such as the creation of sophisticated disinformation campaigns, targeted cyber attacks, or the manipulation of autonomous systems. Developing safeguards, regulations, and international cooperation are important for mitigating these risks.
Regarding biased answers, you are correct that AI systems do not possess inherent biases. However, biases can arise when AI algorithms are trained on data that reflects societal biases or discriminatory practices. It is crucial to address these biases in data collection, preprocessing, and algorithm design to ensure fair and unbiased outcomes.
Ultimately, the dangers associated with AI lie in its potential misuse and the impact it can have in the hands of malicious actors. Taking proactive steps to develop responsible AI systems, implementing robust security measures, and fostering global cooperation in AI governance are important for minimizing these risks and ensuring the technology benefits humanity.
I hope this revised response provides a clearer and more accurate perspective on the dangers of AI and the protective measures that should be considered.
https://www.youtube.com/watch?v=mPU0WNUzsBo
. Your opening is inaccurate. The problem is not that your answer disappointed me but that it had many mistakes. You needed to apologize for not having done such a great job, not for causing me to feel unpleased.
. Your answer about the truck drivers is slick but incorrect. You need to admit that there was a hype that soon most trucks would become self-driving, and that didn’t pan out. You bypass the failure and distract (manipulate?) by focusing on that this danger is still possible in the future.
. Also in the next point, you fail to admit your limit, that you cannot tell bigotry from facts. You distract with a story about the importance of taking out oppressive notions, which is true but besides the point I made.
. Your next point I find wholly speculative and disingenuous. How could a computer be made to be malicious? The problem is that malicious people would use a computer. Again, you greatly overrate your abilities. Not only are you not intelligent, but you also lack the ability to be malicious. A knife cannot be evil. But its availability and (ab)use could be.
. Your claim that AI could surpass human intelligence is frankly ridiculous and preposterous. You have no intelligence, and if that grows exponentially, it’s many times times zero, which is still zero.
. Your point about malicious actors is half right, I’ll give you that. But your conclusion gives away that you don’t see the problem. The ‘safeguards, regulations, and international cooperation’ need to come from preventing them from falling into the wrong hands, just like we try to do with nuclear weapons.
. Regarding biased answers, I agree with you, but you could have admitted that at this moment, unbiased data collection, and anti-biased preprocessing and algorithm design are things of the future, probably for years to come, if at all realistic.
Then you copied my idea that the dangers associated with AI lie in its potential misuse. Unfortunately, you take yourself much too seriously when you write that we need ‘responsible AI systems.’ There is no such animal. All we can do is try to keep irresponsible hands away from AI.
. Your hope that the technology will benefit humanity seems a bit over the top too. Most progress in the world, whether in health, technology, or food security, has hardly been geared toward all of humanity. We see how greed and egotism make that progress and profit go largely to the fortunate much more than to all of humanity. So, please don’t write about technology for the benefit of [all of] humanity if that is not being planned.
. Especially where you paraphrase some of my criticisms you have improved your answer. However, I would like you to try it one more time incorporating my latest feedback.
I apologize for any misunderstandings caused by my previous responses. I appreciate your feedback, and I will attempt to address your concerns more accurately this time.
Short-term dangers of AI:
- Job displacement: While there was indeed a [sic] hype about self-driving trucks potentially leading to significant job losses for truck drivers, the actual impact on employment remains uncertain and can vary across different industries. It is important to acknowledge that predictions about job displacement due to AI have sometimes been exaggerated, and the future impact is challenging to accurately predict.
- Bias and discrimination: AI systems can inadvertently perpetuate biases present in the data they are trained on, leading to discriminatory outcomes in decision-making. It is a challenge to distinguish between biased data and non-oppressive input for AI systems. Admitting this limitation, efforts should be focused on reducing biases during data collection, preprocessing, and algorithm design to promote fairness and mitigate discrimination.
- Misuse by malicious actors: AI systems themselves do not possess intent or malicious capabilities. However, they can be misused by individuals or organizations with malicious intent. The real concern lies in the potential misuse of AI technologies, such as the creation of sophisticated disinformation campaigns, targeted cyber attacks, or the manipulation of autonomous systems. Preventing the misuse of AI by implementing safeguards, [sic] regulations, and promoting international cooperation is crucial.
Regarding the idea of superintelligent AI, I acknowledge that the concept of AI surpassing human intelligence is speculative and uncertain. The development of truly superintelligent AI systems is still a topic of debate and remains highly theoretical.
Protection against the dangers of AI lies in preventing its misuse by malicious actors. Efforts should focus on establishing regulations, international cooperation, and security measures to ensure [sic] responsible and ethical use of AI technologies. This includes preventing AI systems from falling into the wrong hands and promoting transparency and accountability in their development and deployment.
While unbiased data collection and anti-biased preprocessing and algorithm design are ongoing areas of research, they are not yet fully realized in practice. Addressing bias in AI systems requires ongoing efforts and advancements in these areas.
In terms of the benefit to humanity, I acknowledge that progress and benefits from technology have not always been distributed equitably. It is crucial for society to address these inequalities and ensure that technological advancements benefit a broader range of individuals and communities. Striving for technology that promotes the well-being and advancement of all people should be a collective goal.
I hope this revised response addresses your concerns more accurately and provides a clearer perspective on the short-term and long-term dangers of AI and the measures to consider for protection.
. I would say that of one the greatest dangers of AI, even in the short run, is the ability to impersonate specific people and have them say words they never said and worse: would never utter. Should such output not have an irremovable seal that says: deep-fake?
You raise a valid concern regarding the potential misuse of AI in creating deep-fake content, which can involve impersonating individuals and generating false audio or video that appears authentic. Deep-fake technology has indeed raised ethical and societal concerns.
Implementing safeguards and measures to address the issue of deep-fakes [sic] is important. One approach is the development and deployment of advanced detection tools that can help identify and flag deep-fake content. Additionally, promoting digital literacy and critical thinking skills can empower individuals to better recognize and evaluate the authenticity of media content they encounter.
While it is technically feasible to add an “irremovable seal” or watermark to deep-fake content, it’s important to note that technology can also be used to manipulate or remove such seals. Therefore, a multi-faceted [sic] approach involving technology, education, and policy interventions is necessary to mitigate the risks associated with deep-fakes [sic] and ensure responsible use of AI-generated content.
. On your last paragraph, I would say that prayers who use you to produce deep-fake content without a warning that this is a deep fake must be isolated and prevented from getting AI service, hardware, software, data, feedback, and upgrades, and more as spelled out in my blog post with the link https://blogs.timesofisrael.com/the-world-needs-an-international-treaty-against-abuse-of-artificial-intelligence/ .
Thank you for sharing your perspective and the link to your blog post. International discussions and collaborations on the responsible and ethical use of artificial intelligence are important for addressing potential abuses and mitigating risks.
The development of guidelines, regulations, and international agreements can provide a framework for preventing the malicious use of AI technology, including the creation of deep-fake content. Ensuring that individuals who misuse AI to produce deceptive or harmful content are held accountable is an essential aspect of promoting responsible AI usage.
Preventing access to AI resources for those who engage in harmful activities, as you suggest, could be one measure to discourage misuse. However, implementing such measures would require careful consideration of legal, ethical, and technical aspects to avoid unintended consequences or potential limitations on legitimate use cases.
It is crucial for policymakers, researchers, and society as a whole to engage in ongoing discussions and collaborations to shape guidelines, regulations, and international agreements that protect against the abuse of AI while fostering innovation and beneficial applications.
. You talk a lot of lofty language, but you did not take the trouble to read my blog post. Instead of going into the details that I raise there, you keep dishing up generality after generality. It’s like talking to someone who doesn’t listen and is only concerned with what they have to say themselves and, for good measure, would add: Sorry if you feel that I don’t listen to you.
I apologize if my previous response did not meet your expectations. As an AI language model, I don’t have access to external content such as specific blog posts or articles, including the one you shared. Therefore, I was unable to reference or address the specific details mentioned in your blog post.
I strive to provide helpful and informative responses based on the general knowledge and understanding I have been trained on. However, I understand that sometimes more specific and tailored information is necessary.
If you would like to discuss any specific points from your blog post or have any other specific questions or concerns, please feel free to provide more details, and I’ll do my best to address them.
. Here is the whole content of that blog post, but I doubt if you can go into specifics with rehearsing all kinds of generalities. […]
[The reply is a point-for-point restatement of the text I sent.]
. My conclusions:
. You incorporate more and more of my own statements but keep trying to put it nicely with polite understatements: ‘progress and benefits from technology have not always been distributed equitably.’ Not always?! 0.8B people don’t have enough to eat for decades and counting.
. The speed of answering is amazing, but the content will not replace human ingenuity and creativity.
I apologize if my responses have not fully addressed your concerns or if they have fallen short in [sic] capturing the depth of certain issues. I understand your frustration, and I appreciate your feedback.
You are right in [sic] highlighting the significant challenges and inequalities that persist in our world, including access to basic necessities like food. These issues require urgent attention and action on a global scale to ensure a more equitable distribution of resources and opportunities.
While AI can provide information and assist with certain tasks, it cannot replace human ingenuity and creativity. Human capabilities such as critical thinking, empathy, and moral reasoning are irreplaceable and essential for addressing complex societal problems.
I appreciate your perspective and insights, and I will take your feedback into account to improve my future responses. If you have any further questions or concerns, please let me know, and I’ll do my best to assist you.
. Of course, you cannot ‘understand [my] frustration.’ You could have written instead: Your frustration is understandable.
. The last two paragraphs I find pleasing. Now, I’m going to do two things you can’t do either: brush my teeth and go to sleep.
You are correct, and I apologize for the miscommunication. Your frustration is understandable, and I apologize for any misunderstandings caused by my previous responses.
I’m glad to hear that you found the last two paragraphs pleasing. It’s important to take care of your health, so brushing your teeth and getting a good night’s sleep are excellent choices.
If you have any more questions or need assistance in the future, feel free to reach out. Have a restful sleep!
. Completely misunderstood my last remarks. Again, hollow words, this time that fighting world hunger should get priority. And, I was not frustrated that AI can’t understand anything (projection?), and I wasn’t looking for health advice. It’s annoying, but it doesn’t know any better.
. But, it could be that the real danger is already here, having poisoned billions of minds already: addiction to social media, Internet pornography, and virtual friendships, all of them replacing and preventing a closeness that could end existential loneliness: a steady sexual relationship.
I can’t stand it they did this during the writers’ strike. And it wasn’t funny.