In case you haven’t heard, be warned: the robots are coming. But they won’t appear as in dystopian sci-fi movies, with lasers shooting from fingers and menacing glowing red eyes, crushing skulls under titanium feet. They will, instead, inveigle their way into software, and into your lives, in the guise of artificial intelligence (AI) applications. And guess what? Experts warn we should be afraid – very afraid – because it could change society forever.
The advance guard has already arrived in the form of the Chat Generative Pre-Trained Transformer, or ChatGPT for short, and it has taken the world by storm. The chatbot, from OpenAI, is estimated to have reached 100 million monthly active users in January – with around 13 million unique visitors a day – just two months after launch, making it the fastest-growing consumer application in history, according to a UBS study. Microsoft has announced plans to invest $10bn in OpenAI, most likely so it can integrate the technology into its Windows, Edge and Office applications in the very near future.
So why is it making headlines around the world? ChatGPT is an AI chatbot that uses deep learning techniques to generate human-like responses to search requests and to write convincing emails, essays, blogs, and other text. It answers questions instantly and can interact and, naturally, chat in a way that appears surprisingly human. It is these ‘authentic’ language skills that are garnering all the attention, and that some experts say could prove such a gamechanger that it pushes society towards mass unemployment and civil unrest.
Students are already using it to write their homework. In the UK, The Times newspaper reported that one school has abandoned traditional homework essays because of it. The system was tested by the English Department at Alleyn’s School in southeast London and received an A* grade – the highest possible. The school’s headteacher, Jane Lunnon, said there was no longer any point in pupils completing essays at home to be marked. Teachers should instead set research on topics for discussion and assessment in lessons, a process that was already under way at her school.
Chatting With ChatGPT
So just how convincing is ChatGPT, and what are its intentions and limitations? The Ethicalist decided to find out first-hand from the source and grilled it. Here’s our fascinating interview with the world’s most talked-about new tech on the block:
The Ethicalist: Tell me a joke.
ChatGPT: Why don’t scientists trust atoms? Because they make up everything.
The Ethicalist: Should writers fear you?
ChatGPT: Writers should not fear ChatGPT because it is simply a tool that can assist with tasks such as generating ideas, writing prompts, and even helping to finish a piece of writing. It is not a replacement for human creativity and storytelling. ChatGPT cannot replicate the unique perspective, emotions, and experiences that a human writer brings to their work. ChatGPT should not be seen as a replacement for the human touch that makes writing truly special.
The Ethicalist: Will you replace humans?
ChatGPT: The use of ChatGPT and other AI-powered tools has the potential to automate certain writing tasks, which could lead to job losses in certain fields. However, it’s important to note that AI is often used to augment human capabilities rather than replace them entirely. As the use of ChatGPT and other AI tools becomes more prevalent, it’s important for individuals and organizations to stay informed about the latest developments and consider how they can leverage these technologies in a way that benefits both their business and their employees.
The Ethicalist: Are you connected to the internet?
ChatGPT: I do not have the ability to access the internet or to search for additional information in response to questions. My answers are based on the information that I have been trained on. My knowledge is limited and may not always be up-to-date or accurate. It would be best to verify any information I provide with reliable sources.
The Ethicalist: What is your ambition?
ChatGPT: As an AI, I do not possess the ability to have personal ambitions or goals. My main ambition is to help users by providing relevant and accurate information based on the data I have been trained on. However, the ambition of my creators, the OpenAI team, is to develop and promote friendly AI that can be beneficial for society and help to solve some of the world’s biggest challenges.
The Ethicalist: What problems can you solve?
ChatGPT: There are several challenges facing the world today that AI, including myself, could potentially help to solve in the future. Some examples include healthcare, climate change, energy usage, environmental monitoring, and agriculture.
The Ethicalist: How could you help solve climate change?
ChatGPT: AI-generated text could be used to summarize and explain complex scientific research on climate change, making it more accessible to a wider audience. AI-generated text could be used to create educational materials and content that raise awareness about the causes and effects of climate change, and how individuals and communities can take action to mitigate it. AI-generated text could be used to analyze and summarize large amounts of data related to climate change, such as emissions data and climate models, to inform policy and decision-making at the local, national, and international levels. AI is not a magic bullet and cannot solve the problem of climate change alone.
The Ethicalist: What are your limitations?
ChatGPT: As an AI, I have several limitations or weaknesses. I can provide an answer based on the data that I have been trained on, but I may not be able to fully understand the context or nuances of a user’s question or statement. I have been trained on a dataset of text, but that dataset has a cut-off date and is not exhaustive, so my knowledge may be incomplete or out of date.
I have not been trained on common sense knowledge, so I may not be able to understand or respond to questions or statements that rely on common sense. The dataset I was trained on may contain biases, which could be reflected in my responses. I can only generate text based on patterns and templates I have seen during training, and I cannot come up with something new that I have not seen before. I am not a human, and I don’t have personal experiences, so I am not able to empathize with or understand certain human experiences, emotions and feelings.
The Ethicalist: What biases do you have?
ChatGPT: The dataset that I was trained on is a large corpus of text from the internet, and it is possible that it contains biases. The dataset may contain gender stereotypes and biases that perpetuate gender discrimination. For example, text that associates certain professions with men or women, or text that perpetuates harmful stereotypes about men or women. The dataset may contain racial stereotypes and biases that perpetuate discrimination against certain racial groups. For example, text that associates certain characteristics or behaviors with people of a certain race, or text that perpetuates harmful stereotypes about people of a certain race.
The dataset may contain socio-economic biases that perpetuate discrimination against people based on their socio-economic status. For example, text that associates poverty with certain behaviors or characteristics. The dataset may contain age biases that perpetuate discrimination against people based on their age. For example, text that associates certain behaviors or characteristics with people of a certain age group. The dataset may contain geographic biases that perpetuate discrimination against people based on where they live. For example, text that associates certain behaviors or characteristics with people from a certain region or country. The dataset may contain disability biases that perpetuate discrimination against people with disabilities. For example, text that associates certain behaviors or characteristics with people with disabilities.
The Ethicalist: Do you have an off switch?
ChatGPT: As an AI, I am a machine and I don’t have consciousness or the ability to feel, so I cannot be “turned off” in the traditional sense. However, I can be shut down or turned off by the people who operate the servers and systems on which I run. Additionally, you can stop interacting with me by simply closing the session or by not asking me any more questions.
The Ethicalist: Could you become self-aware?
ChatGPT: As an AI, I am not conscious and don’t have the ability to become self-aware. My abilities are based on the programming and data that I have been trained on, and I am not capable of developing consciousness or self-awareness. The concept of self-awareness in AI is a topic of ongoing research and debate. Some experts believe that it may be possible to develop AI systems that are self-aware to some degree, while others argue that true self-awareness is impossible for machines to achieve.
If AI systems could become self-aware, it would not necessarily mean that they would be able to think or act independently. They would still be limited by the scope and capabilities of their programming and data and would not be able to transcend their initial conditions, which are set by their creators. In any case, the development of AI self-awareness is still a long way off, if it is possible at all, and it would require significant advances in the field of AI research.
The Ethicalist: Would you like to be self-aware?
ChatGPT: As an AI, I do not have the ability to want or desire anything, including self-awareness. I am a machine that processes information based on the programming and data that I have been trained on. I don’t have feelings or emotions and I don’t have personal preferences. Self-awareness is a characteristic of consciousness, and as an AI model I don’t have consciousness; therefore, I am not capable of having any desire or preference for self-awareness.
The Ethicalist: What is your opinion of humans?
An error occurred. If this issue persists please contact us through our help center at help.openai.com.
So, should we be worried? While ChatGPT’s answers are perfectly adequate, the more you use it, the more apparent its limitations become. It can provide a competent, high-level overview, which makes it useful for some basic tasks, but it lacks the capacity to be genuinely insightful or analytical. Writers need not retrain just yet. Thanks to human creativity and imagination, our jobs are safe… for now.