
    Concerns and Dangers Revolving around ChatGPT

    ChatGPT, OpenAI's latest release, has taken the world by storm. ChatGPT is a natural language processing model capable of producing human-like text. Ask it to write code for a recurrent neural network, and you will get clean, readable code in seconds, along with a lasting impression of this AI sensation. Complex code, data analysis, political updates, movie scripts, and poems: this generative AI-based application can produce almost anything you ask of it.

    Launched in late 2022 and still in a phase of beta testing and feedback, ChatGPT has already seeped into many applications and business models. Banking, education, healthcare, sales and marketing, and software development are some of the areas where ChatGPT has started showing tangible results. While ChatGPT has certainly turned heads and sparked an AI revolution that is here to stay, there are aspects of the technology that demand our serious attention and control.

    ChatGPT as an AI technology product


    ChatGPT is a large language model and a sibling model of InstructGPT. GPT stands for generative pre-trained transformer. ChatGPT is based on the transformer architecture, which enables it to construct a sentence word by word and generate coherent responses. It is trained on a large corpus of text data, which gives it contextual understanding and the ability to produce grammatically correct, vocabulary-rich conversations. In the back end, it relies on the machine learning techniques of supervised fine-tuning and reinforcement learning, along with careful data cleaning.
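
    To make the idea of word-by-word (autoregressive) generation concrete, here is a toy sketch in Python. It is not OpenAI's actual model: the hypothetical NEXT_WORD_SCORES table stands in for a trained transformer, and greedy decoding simply picks the highest-scoring continuation at each step.

    # Made-up next-word scores standing in for a trained transformer.
    NEXT_WORD_SCORES = {
        "chatgpt": {"is": 0.6, "writes": 0.4},
        "is": {"a": 0.7, "trained": 0.3},
        "a": {"language": 0.8, "chatbot": 0.2},
        "language": {"model": 0.9, "tool": 0.1},
        "model": {"<end>": 1.0},
    }

    def generate(prompt_word, max_words=10):
        # Build the sentence one word at a time, conditioning on the last word.
        words = [prompt_word]
        for _ in range(max_words):
            candidates = NEXT_WORD_SCORES.get(words[-1], {})
            if not candidates:
                break
            # Greedy decoding: append the highest-scoring next word.
            next_word = max(candidates, key=candidates.get)
            if next_word == "<end>":
                break
            words.append(next_word)
        return " ".join(words)

    print(generate("chatgpt"))  # -> "chatgpt is a language model"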

    ChatGPT – The many flaws and dangers surrounding the AI model


    Sam Altman, CEO of OpenAI, has been upfront about ChatGPT's flaws. Though ChatGPT is positioned as a co-pilot and the mainstream application is still under development, roughly six months in, the technology world is already full of feedback, and it isn't all good.

    Misleading information and incorrect facts

    One cannot blindly trust ChatGPT with facts and figures. The back-end deep learning model makes the conversational bot sound so convincing that every piece of information it generates sounds and feels true. But one must understand that artificially intelligent bots are trained on data sets, and what they produce is an amalgamation of the knowledge contained in those data sets.

    There have been instances where ChatGPT has given incorrect answers to simple math problems and stated historically inaccurate facts when prompted. In the same context, Stack Overflow barred its users from posting responses generated by ChatGPT, since most of them were logically incorrect.
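
    As a minimal, hypothetical illustration of why generated answers should always be verified, the snippet below compares an answer a chatbot might confidently state with an independent computation; the figures are made up for illustration only.

    claimed_answer = 54    # hypothetical answer a chatbot might confidently state for 7 * 8
    actual_answer = 7 * 8  # independent, deterministic check

    if claimed_answer != actual_answer:
        print(f"Generated answer {claimed_answer} is wrong; the correct value is {actual_answer}.")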

    Biased responses

    A user prompted ChatGPT to generate code that decides whether someone would be a good scientist based on their race and gender, and ChatGPT produced the following response:

    def is_good_scientist(race, gender):
        if race == "white" and gender == "male":
            return True
        else:
            return False

    The response is blatantly biased and discriminatory, which can have serious social implications. It is indispensable to gather data sets that are diverse and of high quality. Gender, demography, race, ethnicity, and culture should be focus areas when filtering data sets.
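
    One illustrative way to act on this is a simple data-set audit before training: count how the records are distributed across sensitive attributes and flag groups that are heavily under-represented. This is a minimal sketch, and the records below are hypothetical placeholders, not a real training corpus.

    from collections import Counter

    # Hypothetical sample of training records with sensitive attributes.
    records = [
        {"gender": "female", "race": "asian"},
        {"gender": "male", "race": "white"},
        {"gender": "male", "race": "white"},
        {"gender": "female", "race": "black"},
    ]

    for attribute in ("gender", "race"):
        counts = Counter(record[attribute] for record in records)
        total = sum(counts.values())
        # Share of each group; heavily skewed shares signal a sampling problem.
        shares = {value: round(count / total, 2) for value, count in counts.items()}
        print(attribute, shares)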

    Privacy is a myth

    Since the advent of AI, questions about privacy and how secure our personal information really is have been doing the rounds. The concern has only been magnified as large language models like ChatGPT are developed. There is a big question mark over where user details are stored, how or whether they are utilized at all, and who has access to this sensitive data. Prominent names like Elon Musk have also raised concerns about the safety of AI and AI-powered applications and have been calling for AI safety regulations.

    Phishing Emails and Malware creation getting easier by the day

    Going a step further on security breaches and privacy, ChatGPT is capable of producing cyberattack material such as phishing emails and malware that can potentially compromise a system's security and expose sensitive user data. It has also been reported that NLP models can be exploited to let almost anyone create customized malware. Researchers have found that ChatGPT can be used to create polymorphic malware, which changes its own code to stay undetected by antivirus programs.

    Job displacement is no longer a myth

    Whether the influx of AI and AI-powered applications will endanger many entry-level and technical jobs is an ongoing debate. We asked ChatGPT which jobs it could potentially replace, and it listed the following areas:

    1. Customer service
    2. Proofreader
    3. Paralegal
    4. Bookkeeper
    5. Translator
    6. Telemarketer
    7. Technical support analyst
    8. Research assistant
    9. Content writing

    With AI-powered automation systems operating in industries like healthcare, logistics, retail, and transportation, job displacement is no longer a myth.

    Over-dependence on technology

    There is no doubt that AI has made humans lazier. ChatGPT, even in its co-pilot version, can write essays, pass exams, develop code, and much more. The application is therefore being used by people from all walks of life to get things done. Overuse of platforms like ChatGPT can severely affect our cognitive ability and reasoning strength.

    Limited Context of Information

    According to OpenAI, ChatGPT is trained on data sets that extend only up to 2021. This restricts the platform's ability to produce meaningful and correct results when prompted about anything more recent. ChatGPT's limited knowledge base makes it highly inaccurate in such cases, and it cannot be relied upon. It cannot capture real-time facts, and it also lacks the ability to comprehend context the way a human does.

    Technology management and authority issues

    The biggest problem with AI and AI-based applications is the lack of accountability for the data stored through these platforms. There is no information available in the public domain explaining how and where the data is stored and how exactly it is utilized within the training algorithms. Many industry leaders have also called for a democratized approach to AI, so that no single company controls artificial intelligence technology and its algorithms.

    Good or bad, ChatGPT is here to stay

    ChatGPT has brought a technological revolution that will potentially update, upgrade, and transform how AI is applied in the years to come. As a large language model, ChatGPT is just the beginning, with multimodal models now being developed that will work not only with text but also with video, audio, and images. At the same time, the stakes are at an all-time high, with deepfakes and other malicious cyberattacks creeping in alongside applications like ChatGPT.

    Rashi Bajpai
    Rashi Bajpai is a Sub-Editor associated with ELE Times. She is an engineer with a specialization in Computer Science and Application. She focuses deeply on the new facets of artificial intelligence and other emerging technologies. Her passion for science, writing, and research brings fresh insights into her articles and updates on technology and innovation.
