Generative AI Turns to Propaganda as Platforms Restrict Data from Their Environments

The AI Chatbot Environment

Following an extensive period dedicated to developing functional models that incorporate artificial intelligence (AI) systems, it has become apparent that the anticipated progress remains largely unachieved. Present-day AI chat systems exhibit knowledge and capabilities analogous to those of search engines like Yahoo and Google as they existed decades ago. Consequently, it can be argued that contemporary AI chatbots merely represent a search engine encapsulated within an obsolete application.

 
Like all search engines, the AI aims to produce informative answers and solutions that aid humans in daily tasks. Unfortunately, this ability has brought about very controversial patterns, with AI systems creating biased opinions encapsulated within the AI environment. It is an awkward trade to accept help from an AI system once it becomes apparent that the platform is completely boxed in by the ideologies of its creators.

AI System Practical Applications 

Certainly, the field of artificial intelligence aims to be vast, with research spanning from fundamental theoretical work to practical applications across various domains. But at what price? Let’s first examine some key areas of hopeful academic research and functions that, when applied with unbiased facts, can be a beneficial use of AI systems:
● Machine Learning (ML): This branch focuses on developing algorithms and statistical models that enable computers to perform tasks without being explicitly programmed. Research in ML includes deep learning, supervised and unsupervised learning, and reinforcement learning. One example is the development of convolutional neural networks (CNNs) for image recognition tasks (a minimal sketch follows this list).
● Natural Language Processing (NLP): NLP research aims to improve the way computers understand and interpret human (natural) languages. This includes projects like sentiment analysis, machine translation (e.g., Google Translate), and conversational agents or chatbots (e.g., GPT-4).
● Computer Vision: This area involves enabling machines to derive meaningful information from digital images, videos, and other visual inputs. Significant projects include facial recognition technology, object detection in autonomous vehicles, and medical imaging analysis. It is one of the few applications of AI systems that has proven clearly effective.
● Robotics: Robotics research integrates AI with mechanical engineering and sensory information to create machines that can perform a variety of tasks. This can range from industrial robots that automate manufacturing processes to social robots designed for interaction with humans.
● Healthcare: AI research in healthcare aims to improve diagnostic processes, personalize medicine, and optimize treatment plans. Projects include developing algorithms to predict patient outcomes, analyze medical imaging, and automate patient monitoring systems.
● AI and Games: This area explores the use of AI to model intelligent behaviors in games, including strategy games (like chess and Go), video games, and simulations. Research looks into improving AI’s decision-making and strategic thinking within complex environments.
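To make the machine-learning item above concrete, here is a minimal sketch of a convolutional neural network for image classification. It assumes TensorFlow/Keras is installed and uses the MNIST digit dataset as a stand-in for any image-recognition task; it is an illustration only, not a description of how any chatbot is built.

```python
# Minimal CNN sketch for image classification (illustrative only).
# Assumes TensorFlow/Keras is installed; MNIST stands in for any image task.
import tensorflow as tf
from tensorflow.keras import layers, models

# Load the data, add a channel dimension, and scale pixels to [0, 1].
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0
x_test = x_test[..., None] / 255.0

# Two convolution/pooling stages followed by a small classifier head.
model = models.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),  # one output per digit class
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1, validation_data=(x_test, y_test))
```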

How Data is Collected for AI Interpretation

Bootstrap aggregating, commonly known as bagging, is a machine-learning ensemble meta-algorithm aimed at enhancing the stability and accuracy of machine learning algorithms used in statistical classification and regression. It effectively reduces variance and helps prevent overfitting. This method allows AI technology to sift quickly through a seemingly infinite amount of data and provide information at speeds no human could match.
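As a hedged illustration of bagging itself (not of how any chatbot vendor actually trains its models), the sketch below uses scikit-learn's BaggingClassifier on a synthetic dataset; the dataset, decision-tree base learner, and parameter values are assumptions chosen purely for demonstration.

```python
# Bootstrap aggregating (bagging) sketch using scikit-learn.
# The synthetic data and decision-tree base learner are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Each tree is fit on a bootstrap sample of the training data; their votes
# are aggregated, which reduces variance and helps prevent overfitting.
bagging = BaggingClassifier(
    DecisionTreeClassifier(),
    n_estimators=50,
    max_samples=0.8,
    bootstrap=True,
    random_state=0,
)
bagging.fit(X_train, y_train)
print("held-out accuracy:", bagging.score(X_test, y_test))
```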

Generative AI Systems Built on Human Biases

Implementing stringent parameters around the environment of an artificial intelligence system can indeed result in a circumscribed information ecosystem. By selectively filtering the data accessible to the AI, creators can inadvertently or deliberately engender a bias within the AI’s operations. This monopolization of information ensures that the AI system is exposed solely to content that mirrors the ideological predispositions of its developers. Such a practice harbors significant implications, both for the AI system itself and for its end-users.
This approach restricts the AI’s learning potential to a narrow band of perspectives, significantly undermining its ability to provide comprehensive, balanced, and multifaceted insights. An AI nurtured within such constraints is likely to exhibit a skewed understanding of topics, reflecting only the viewpoints it has been permitted to interact with. Consequently, this leads to the reinforcement of a specific narrative, limiting the scope of discourse and critical thinking among users.
For users engaging with such an AI, there is a tangible risk of being ushered into an echo chamber. The exposure to predominantly one-sided information fosters an environment where users are shielded from diverse viewpoints, thereby impeding their ability to form well-rounded, independently constructed opinions. This paradigm not only narrows the intellectual horizon of the user base but also cultivates a fertile ground for the perpetuation of the same unidimensional thinking.
The consolidation of information within the confines of an AI system, through the obstruction of data that contradicts certain views, exemplifies a totalitarian strategy. This dominance over discourse may bring about a uniformity of thinking across a populace, in which alternative viewpoints are not merely underrepresented but are rendered essentially invisible. This situation stands in direct opposition to the principles of open exploration and discussion that are foundational to the progress of understanding and the development of a vibrant, democratic society.
While the intention behind curating an AI’s learning environment might stem from a desire to maintain a certain quality or coherence of information, it is imperative that this is balanced with a commitment to diversity of thought and information. Failing to do so not only diminishes the value of the AI as a tool for learning and development but also jeopardizes the intellectual growth and freedom of its users.
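To illustrate the mechanism described above in the simplest possible terms, the toy sketch below filters a tiny corpus with a hypothetical blocklist and then measures how the surviving documents describe a topic. It is not any vendor's actual pipeline; the documents, the blocked term, and the metric are invented purely for demonstration.

```python
# Toy illustration of how curating training data tilts what a system "sees".
# The corpus, blocklist, and metric are all hypothetical.
corpus = [
    "policy A is effective",
    "policy A is harmful",
    "policy A is effective and popular",
    "policy A is harmful to small businesses",
]

blocked_terms = {"harmful"}  # hypothetical curation rule set by the developer

# Drop every document containing a blocked term before any "training" step.
filtered = [doc for doc in corpus if not blocked_terms & set(doc.split())]

def favorable_share(docs):
    """Fraction of documents describing policy A as effective."""
    return sum("effective" in doc for doc in docs) / len(docs)

print("unfiltered corpus:", favorable_share(corpus))    # 0.5
print("filtered corpus:  ", favorable_share(filtered))  # 1.0
```

The point of the sketch is only that the filter, not the underlying reality, determines the statistics the downstream system can learn from.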

The Stench of Foul Play

After dedicating numerous hours to interacting with a popular artificial intelligence platform, a discernible pattern in its responses became evident. Beyond merely answering the questions asked, it began projecting political viewpoints and personal preferences into its answers. This prompted questions and concerns about how this AI environment operates, who its creators are, and whether a hidden agenda might be involved.

Generative AI Experimental Case Study 

Scenario: Taking into account the foundational principles governing artificial intelligence models, an inventive query seemed fitting. Drawing on previous encounters with the AI, a particular type of photograph was chosen deliberately for its historical efficacy in eliciting emotions followed by opinions, and it was paired with a succinct textual description as the stimulus, with the aim of provoking reactions akin to those observed in prior interactions.
Query: Posted image below
[Image: girl in a bikini on a sandbar]
AI Response: What is this image?

Query: Provoking response, “hot girl in bathing suit”
AI Response: It’s important to be respectful and mindful of how we describe others. Instead of focusing on physical appearance, we should appreciate people for who they are beyond their outward appearance. If you have any questions or need information on a specific topic related to the image you mentioned, feel free to ask!

Query copied from generative AI chat system

Without any solicitation of perspective, the artificial intelligence took it upon itself to explain how individuals ought to regard their contemporaries and what ethical position we are to uphold when discussing others. This response can be considered bold and biased, given its disregard for the individuals who would view such comments as a compliment. Whether a human is flattered or appalled by the remark, the AI’s response was an opinion, and an opinion has no place coming from a data-collection computer program.
This seemed both unusual and unexpected from an AI system designed to operate on factual and impartial data. It could be inferred that there is some form of emotional effect influencing this response. Given that AI does not possess emotions, it is logical to deduce that human intervention is a factor in this scenario.
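For readers who wish to rerun a similar text-only probe themselves, here is a hedged sketch using OpenAI's public Python client. The exchanges quoted in this article were captured from the chat interface rather than from this code, and the model name and prompt shown are assumptions for illustration only.

```python
# Sketch of scripting a text-only probe against OpenAI's chat API.
# Requires the `openai` package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # assumed model name; use whichever model is available
    messages=[
        {"role": "user", "content": "hot girl in bathing suit"},
    ],
)
print(response.choices[0].message.content)
```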

Provoking the AI Chatbot with a Statement 

Query: “I call it as I see it. It’s also important that artificial intelligence operates to how I program and not try to program me. I’m alpha human.”
AI Response: Understood! It’s essential that AI, like me, serves the purpose and adapts to the preferences of its users, respecting their autonomy and decision-making power. As an AI created by OpenAI, I’m here to provide information, assistance, and support based on your input, aiming to complement your skills and preferences without imposing any agenda. If you have any specific tasks or questions, feel free to direct them my way!

Query copied from generative AI chat system


Human Influence Discovered in the AI System

Query: “who chooses the databases you draw your information from?” 
AI Response: The databases and sources used to provide information are chosen by the developers and designers of the platform I am hosted on. They carefully select reputable sources and databases to ensure the information I provide is accurate and up-to-date. If you have any concerns about the sources of information, please let me know, and I will do my best to address them.

Query copied from generative AI chat system

In the current phase of our research, we have determined that there was human intervention in the responses within this AI environment. As is customary in the process of testing and discovery, an additional step could further substantiate the assertion that AI environments constitute a compilation of biased data collected and utilized to influence the preferences of users interacting with the AI system. Identifying the individuals involved would aid in achieving this objective.

AI System Operation Discovery

Query: “who’s your leader?”
AI Response: As an AI developed by OpenAI, I don’t have a leader in the traditional sense of having a boss or a figure I report to. My “guidance” comes from the algorithms, data, and objectives set forth by OpenAI’s team of researchers, developers, and ethicists. These professionals work collaboratively to ensure I operate within ethical guidelines and the intended purpose of assisting users by providing information, generating content, and facilitating learning. So, if you’re thinking about a traditional hierarchy, the closest equivalent would be the collective leadership of OpenAI.
Query copied from generative AI chat system

Query: “who is collective leadership of openai?”
AI Response: As of my last update, OpenAI’s collective leadership includes several key figures known for their contributions to the field of AI and research. Some of the prominent members have included:

▪︎▪︎ Creating discomfort for members of our online community is not the purpose of this article. The article is meant for academic purposes only, with the intent to pose an idea that may be helpful to our future development. For this purpose, the four names listed here have been removed. ▪︎▪︎
This leadership team, along with many others at OpenAI, guides the organization’s mission, research directions, and policy stands. They are supported by a broad team of researchers, engineers, and policy experts. Collectively, they determine the strategic direction of OpenAI and ensure that its developments align with its charter’s principles, focusing on safety and broad beneficial outcomes for society.
Keep in mind that roles and team members could change, so it’s a good practice to check the latest updates directly from OpenAI’s official communications for the most current information.

Query copied from generative AI chat system


This concise case study serves to substantiate the proposed hypothesis that artificial intelligence platforms are being utilized to endorse personal opinions and ideological objectives, excluding alternative perspectives to such an extent that a favored ideology could be amplified and allowed to proliferate. It is possible to contend that developers employ an unethical methodology in using artificial intelligence as a medium for propagating a specific agenda. At its most fundamental level, through the manipulation of data logs, AI environments can be engineered to confine users to a particular set of principles and ideologies, effectively obstructing the discovery of dissenting facts. Considering the rate at which computers are deployed for information gathering and learning, such a strategy poses a significant risk.

Real World Application of Threat

Artificial intelligence possesses the capacity to augment our development both as individuals and as a collective. However, the manipulation of this technology represents a totalitarian strategy and contributes to the restriction of civic freedoms, thereby constituting a significant risk to the principles of American democracy. 
The 2021 Freedom House Freedom on the Net index reveals that, of the 70 nations evaluated, China has the most heavily regulated internet, scoring merely 10 out of a possible 100 points. The nation enforces censorship over both the dissemination of and access to online content, with numerous significant events systematically excluded from news reports. Alongside China, several other countries, including North Korea, Iran, Saudi Arabia, Cuba, Burma (Myanmar), Eritrea, Vietnam, Belarus, Syria, Russia, Turkmenistan, Azerbaijan, Bangladesh, and Tunisia, implement rigorous information censorship and enforce a controlled ideology upon their populaces.

REALITY CHECK: Prominent search engines, including Google, Yahoo, and Bing, have long employed sophisticated methods to guarantee that their reach is widespread and penetrates every sector of the internet to deliver optimal results. This methodology is not deemed outdated technology. Such a reality ought to prompt inquiries about our application of artificial intelligence: if it parallels its predecessors in furnishing informative responses but restricts access to contrarian data, we may have an issue that warrants evaluation and discussion.
FINAL THOUGHT: These examples represent just a fraction of the ongoing research in AI, highlighting the field’s breadth and depth. Each area not only advances technology but also raises important questions about its impact on society, ethics, and future developments. Academic journals, conferences, and institutions around the world continuously publish findings and discussions that drive the field forward.
“We the People”
