Introduction
“The seatbelts and airbags for generative AI will get developed very soon.”
– Ajoy Singh, COO and Head of AI, Fractal Analytics
With the increasing use of generative AI, data security on these platforms has become a growing concern. Recent news about the leak of user chat titles on ChatGPT and other data breach incidents has made users even more worried and vigilant about their privacy. Amidst all the confusion and fear regarding data safety and privacy on AI platforms, we reached out to some industry leaders for their expert opinion on data security in the AI era.
This article will cover topics ranging from the development and use of AI training datasets to the ethics of AI sharing intellectual property. We will investigate the safety of using AI platforms and identify best practices to increase data security.
Table of Contents
- Data Breach on AI Platforms
- The Concern for Data Privacy on AI Platforms
- Consent for Data Sharing
- Ensuring Data Security on AI Platforms
- How Safe Is AI-based Training for Humans?
- Intellectual Property Violation on AI Platforms
- Conclusion
Data Breach on AI Platforms
Data security and privacy have always been fundamental aspects of every digital platform. With the advancements in artificial intelligence, they have become even more critical. Handling and storing data securely on AI platforms is crucial to prevent it from falling into the wrong hands or being misused. Given the type and amount of data stored on these platforms, a data breach could prove detrimental to individuals, companies, or even governments.
Data breaches can also compromise the AI algorithms used in the platform, leading to inaccurate predictions and insights. This can have significant consequences in various fields, such as finance, marketing, and security. Inaccurate predictions and insights can lead to financial losses, reputational damage, and security threats.
Before we discuss data security on AI platforms in detail, we must first understand what kinds of data are used in AI development. AI platforms are trained on large datasets comprising vast amounts of information published online over the years. This includes data from various sources such as search engines, social media platforms, chatbots, online forms, and more.
AI algorithms process all this collected data and help the machine learn human language, generate insights, and make logical predictions. Even after launch, AI platforms continue to train on new data gathered from user queries and responses.
The Concern for Data Privacy on AI Platforms
“Most people aren’t aware that when their mobile phones or other devices are simply lying around, they (the devices) are listening to their conversations.”
– Debdoot Mukherjee, Chief Data Scientist, Meesho
My friend and I were sitting in my living room the other day, with a home assistant device in the corner and our phones on the table. Among the many things we discussed that day was her recent trip to Turkey. Surprisingly, the next day, Google started showing me ads for travel packages to Turkey. Does this incident sound familiar to you?
It certainly spooked me to feel I was being spied on by all the technological devices around me. My private conversations no longer felt private. And that’s when I gave serious thought to data security and privacy for the first time.
Mr. Kunal Jain, CEO of Analytics Vidhya, shared a similar story with us, adding that his experience has made him wary of the devices he uses at home. He, too, was subjected to targeted advertising based on private conversations at home. As a cautionary measure, he now ensures that home assistant devices are switched on only when required and that no personal conversations take place while they are on. This is a safety rule we could all follow, considering our personal devices can hear us, especially since all of them are connected.
While speaking to Mr. Debdoot Mukherjee (Chief Data Scientist, Meesho) about this, he agreed that using personal data in such a way is a privacy breach. He added that most people aren’t aware that when their mobile phones or other devices are simply lying around, they (the devices) are listening to their conversations and probably recording them in a database.
Read More: Is Your Privacy at Risk? How Fog Data Science Trades Location Data
Consent for Data Sharing
“People are now more open about sharing their personal lives online while at the same time taking offense to their data being shared or used for AI training.”
– Ajoy Singh, COO and Head of AI, Fractal Analytics
Now the question is whether we were told or asked before our data was used for AI development, and if informed, how willing are we to contribute to the training datasets? Answering this, Mr. Jain says, “None of us were informed that our data or the database we helped build was being used for AI development. It wasn’t explicitly agreed upon.”
He explains that ChatGPT is trained using reinforcement learning from human feedback, not just machine-based reinforcement learning, which is why it requires access to our data. “Every product works on feedback to improve. If I am told that any data I share would be used for training or improving an AI platform, I would be glad to be a part of it,” he adds.
Mr. Ajoy Singh, COO and Head of AI at Fractal Analytics, says that ethically, all AI must be trained on publicly available data, not private or personal data. But now that it’s already done the way it is, people at least need to be informed about this. He further explains that it all comes down to seeking permission before accessing or using someone’s private data.
“People are now more open about sharing their personal lives online while at the same time taking offense to their data being shared or used for AI training,” he says. “90% of people are not aware that their commands to all of these AI – Siri, Alexa, Google Assistant, etc. – are being recorded,” he adds. Hence, more than the sharing of personal data, it is the lack of consent that offends people.
That explains people’s outrage when Google came out stating that Gmail users’ data was used [without their consent] to train their conversational AI, Bard. According to Mr. Singh, transparency is the way to go. “Companies have to be transparent about using our data. They should clarify to us what options we have to enable or disable data sharing and what kinds of data they are taking from us,” he says.
Our privacy is breached when websites store our data without permission and developers use it to train their models. Therefore, data privacy in the AI era comes down to user consent. People should be asked clearly and given a genuine choice about whether or not to share their data.
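To make the idea concrete, here is a minimal, hypothetical sketch of what consent-aware data handling could look like in a training pipeline: records are used only when their owners have explicitly opted in. The field names and records below are invented purely for illustration.

```python
# Hypothetical sketch: honor an explicit opt-in flag before any user
# record reaches a training pipeline. Field names are illustrative.

def consented_records(records):
    """Keep only records whose owners explicitly opted in to data sharing."""
    return [r for r in records if r.get("consented_to_training") is True]

users = [
    {"id": 1, "text": "Loved my trip to Turkey!", "consented_to_training": True},
    {"id": 2, "text": "Here is my new number...", "consented_to_training": False},
    {"id": 3, "text": "Great weather today."},  # no flag: treated as no consent
]

training_data = consented_records(users)
print(len(training_data))  # 1 -- only the explicitly consenting user remains
```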
Ensuring Data Security on AI Platforms
Now that we understand the importance of data security on AI platforms and the potential risks of a data breach, how can we ensure our data is shared safely?
Mr. Jain says that, architecturally, the developers would have closed all possible loopholes that could let someone use AI to access private data. Moreover, AI is trained on masked content, sharing only the textual or language data and not who said what. In other words, AI uses the data to learn language processing and cannot trace it back to the individuals who provided it. At this point, he says, it would be surprising to see an AI link a conversation to a particular individual or entity, or to see anybody extract such information from an AI.
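Mr. Jain’s point about masked content can be illustrated with a small sketch. Assuming a simple regex-based approach (production pipelines use far more sophisticated PII detectors), the example below replaces emails and phone numbers with placeholder tokens before the text enters a training corpus.

```python
# Illustrative sketch of masking personal identifiers before training,
# assuming a regex-based approach; real pipelines use dedicated
# PII-detection models rather than hand-written patterns.

import re

PATTERNS = {
    "[EMAIL]": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "[PHONE]": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def mask_pii(text: str) -> str:
    """Replace recognizable personal identifiers with placeholder tokens."""
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

print(mask_pii("Reach me at jane.doe@example.com or +91 98765 43210."))
# Output: Reach me at [EMAIL] or [PHONE].
```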
Currently, AI platforms do have certain measures in place to ensure data security. First, AI tools are built with access controls that limit who can reach the data. Regular security audits are also conducted to help identify potential vulnerabilities in the system. Moreover, encryption techniques are employed to ensure that even if the data is compromised, it cannot be accessed or read without the encryption key.
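As a minimal illustration of the encryption point, the following sketch uses symmetric encryption from Python’s cryptography package: even if the stored ciphertext leaks, it is unreadable without the key. This is a generic example, not a description of any particular platform’s implementation.

```python
# Minimal sketch of encrypting data at rest with the `cryptography`
# package (pip install cryptography). A breach of the stored ciphertext
# alone exposes nothing readable.

from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, held in a key-management service
cipher = Fernet(key)

ciphertext = cipher.encrypt(b"user query: plan a trip to Turkey")
# `ciphertext` can now be written to disk or a database safely.

plaintext = cipher.decrypt(ciphertext)  # only possible with the key
print(plaintext.decode())
```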
Mr. Mukherjee says that AI research and development companies must be aware of potential breaches and plan accordingly. More importantly, he says there should be laws and regulations [regarding this] in place, and they must be strictly enforced on the companies.
We need to understand the potential of AI technology and place regulatory frameworks around it to ensure data security and privacy go hand in hand with the pace of AI development. Developers, users, and regulatory bodies must work together to achieve this. More importantly, companies must face the consequences if things are not done right.
AI platforms are still under development, and they improve only through trial, error, and feedback. “The seatbelts and airbags for generative AI will get developed very soon,” says Mr. Singh, looking forward to a safer AI era.
How Safe Is AI-based Training for Humans?
“AI technology should not be used to train humans where there is a potential risk to life or where the cost of error is huge.”
– Ajoy Singh, COO and Head of AI, Fractal Analytics
Artificial intelligence is developing at such a fast rate that AI platforms, built and trained by humans, are now capable of teaching and training humans in return. E-learning platforms like Duolingo and Khan Academy have already integrated ChatGPT-based bots into their teaching systems, and others seem to be following suit. From a time when people fed information into AI, we are now moving to an age where AI is used to educate people.
Mr. Jain finds artificially intelligent platforms to be the most patient of tutors. “No matter how long a student takes to grasp a concept, or how many times the same thing needs to be repeated, an AI wouldn’t get emotional or lose patience [unlike human teachers]. The AI would still work on getting the student one step closer to the answer,” he says. Adding another benefit of AI-based learning, he says it can customize the teaching method depending on the student’s level of understanding.
Now, does that mean human teachers will eventually be replaced by AI platforms? Not really. Mr. Jain is certain that the human touch cannot be replaced; if used at all, AI would only be an excellent assistant to human tutors.
All that being said, he also shares his fear of a person’s weaknesses and shortcomings being exploited to build targeted products. “An AI’s knowledge of a student’s shortcomings shouldn’t be used for targeted marketing or product development,” he says. He adds that, luckily, we are still at a point where we can regulate and control these aspects to make AI learning safer for children and students.
AI-based training is indeed a great advancement in AI technology; however, it raises the question of safety again. Knowing that the content generated by AI chatbots like ChatGPT may contain factual errors, and that these bots can be trained to give out biased information, how safe is it to use AI tools to train humans?
Mr. Singh believes using AI in reasoning-based education is fairly safe and efficient. However, he suggests that AI technology not be used to train humans where there is a potential risk to life or where the cost of error is huge – for instance, in medical sciences or pilot training.
Regarding the safety of children using educational AI platforms, he says it is important to train such AI to detect unsafe inputs and ensure safe outputs. He adds that children must also be taught what is right and wrong in the digital world and the potential risks of sharing private data on such platforms.
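At its simplest, “detecting unsafe inputs” could look like the hypothetical screen below, which checks a child’s prompt before it ever reaches the model. Real educational platforms rely on trained moderation classifiers rather than keyword lists; this only sketches the control flow.

```python
# Hypothetical sketch of a pre-model safety check for an educational
# chatbot. Real systems use trained moderation classifiers, not keyword
# lists; this only illustrates where such a check sits in the flow.

UNSAFE_TERMS = {"home address", "phone number", "password"}  # illustrative

def is_safe_prompt(prompt: str) -> bool:
    """Reject prompts that ask for or reveal private information."""
    lowered = prompt.lower()
    return not any(term in lowered for term in UNSAFE_TERMS)

prompt = "What is my teacher's home address?"
if is_safe_prompt(prompt):
    pass  # forward the prompt to the tutoring model
else:
    print("Let's keep personal details private. Try another question!")
```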
Also Read: Is Your Privacy at Risk? How Fog Data Science Trades Location Data
Intellectual Property Violation on AI Platforms
“With so much AI-generated content out there, we no longer know where to draw the line for plagiarism.”
– Kunal Jain, CEO, Analytics Vidhya
The content generated by AI platforms is, ethically speaking, plagiarism at scale, as it comes without source credits or citations. Mr. Jain weighs in, noting that with so much AI-generated content out there, we no longer know where to draw the line for plagiarism. There are so many duplicates and variations of the same information on the internet today – be it in music, art, text, or images – that it has become difficult to trace it back to the original creators.
AI development entities like OpenAI and Midjourney have recently gotten into legal battles over copyright infringement and plagiarism. Creators, artists, and digital media distributors have filed lawsuits against some AI tools, claiming that their artwork was copied, or edited and reproduced, by image-generating AI tools without any credit. While some people find this a violation of intellectual property, others see it as inspired work.
Mr. Singh shares his view, stating, “If you look at human evolution, nothing is original. Every masterpiece and development has been built upon something that already existed or been inspired by something.” So how much of it can we say is copied, and what parts are merely inspired?
Must Read: Navigating Privacy Concerns: The ChatGPT User Chat Titles Leak Explained
Conclusion
Artificial intelligence is developing at its fastest pace today. The data fed into these models during training, testing, and deployment determines how they think and operate. Training an AI on personal data could bias it toward thinking in a particular way. Hence, it is important to choose the training data carefully. As Mr. Singh says, “They (AI) must be trained to keep away any biases impacting global good or the quality of services.”
Developers must prioritize data security to prevent AI from manipulating individuals and infringing on privacy. Although this is an exciting era, we must exercise caution so that we don’t become pawns in the game of AI. With the ever-expanding capabilities of AI, AI organizations and users share the responsibility for safe and ethical data exchange. Let the vision of developing transparent and data-safe AI be realized to its full potential soon.