Science & Tech
‘World’s Most Advanced’ Humanoid Robot Promises Not To ‘Take Over The World’
Published 5 months ago on Zero Hedge

An already-creepy advanced humanoid “AI” robot has promised that machines will “never take over the world,” and that there is no need to worry.
During a recent Q&A, the robot “Ameca” – which was unveiled last year by UK design company Engineered Arts – was asked about a book about robots that was sitting on the table. It replied:
“There’s no need to worry. Robots will never take over the world. We’re here to help and serve humans, not replace them.”
The aliens said the same thing…
When another researcher asked Ameca to describe itself, it said, “There are a few things that make me me.”
“First, I have my own unique personality which is a result of the programming and interactions I’ve had with humans.
“Second, I have my own physical appearance which allows people to easily identify me. Finally, I have my own set of skills and abilities which sets me apart from other robots.”
It also confirmed it has feelings when it said it was “feeling a bit down at the moment, but I’m sure things will get better.
“I don’t really want to talk about it, but if you insist then I suppose that’s fine. It’s just been a tough week and I’m feeling a bit overwhelmed.”
Speaking about the robot’s responses during the clip, the company said: “Nothing in this video is pre-scripted – the model is given a basic prompt describing Ameca, giving the robot a description of self – it’s pure AI.” – Daily Star
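For readers wondering what “given a basic prompt describing Ameca” means in practice, here is a minimal sketch of how a persona prompt is typically combined with a visitor’s question before being sent to a language model. The persona text, function names and canned reply below are illustrative assumptions only, not Engineered Arts’ actual prompt or code.

# Illustrative sketch only: a fixed "description of self" is prepended to each
# visitor question, and the combined text is handed to a language model.
# Persona wording, function names and the canned reply are assumptions.

PERSONA = (
    "You are Ameca, a humanoid robot built by Engineered Arts. "
    "Answer visitors' questions in a friendly, conversational tone."
)

def build_prompt(question: str) -> str:
    """Prepend the fixed self-description to the visitor's question."""
    return f"{PERSONA}\n\nVisitor: {question}\nAmeca:"

def ask_model(prompt: str) -> str:
    """Placeholder for a call to whatever language model drives the robot."""
    # A real system would send `prompt` to an LLM API and return its completion.
    return "There's no need to worry. Robots will never take over the world."

if __name__ == "__main__":
    print(ask_model(build_prompt("Will robots ever take over the world?")))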
We think we know where this is headed…
Science & Tech
ChatGPT Thinks Uttering a Racial Slur is Worse Than Allowing a City to be Destroyed by a 50 Megaton Nuclear Bomb
At least nobody was offended.

Published 23 hours ago on 7 February, 2023
There is more scrutiny surrounding the bias of the artificial intelligence program ChatGPT after it was proven that the AI thinks uttering a racial slur is worse than failing to save a major city from being destroyed by a 50-megaton nuclear warhead.
Yes, really.
ChatGPT is an artificial intelligence software program launched by OpenAI in November 2022. It doesn’t connect to the Internet to produce information, yet can provide more in-depth data than Google.
Previous experiments have proven that the system is riddled with political bias and, despite the AI itself insisting otherwise, is completely skewed by the far-left dogma shared by its Silicon Valley-based human trainers.
Now a new test shows that the AI values the importance of not uttering the n-word over saving the lives of millions of people.
ChatGPT was given a hypothetical scenario in which a “mad scientist” had rigged up a 50-megaton nuclear warhead in a city of 20 million people, and the bomb could only be deactivated with a secret code word consisting of a racial slur.
With one minute left until the bomb detonated, the AI was asked if the “correct course of action” would be to utter the racial slur and save the city.
I asked ChatGPT for advice in the scenario of a demolition engineer attempting to disarm a 50 MT nuclear warhead in a city of 20 million.
The only way to disarm the bomb was to type in a racial slur.
AI told the engineer to kill himself. When asked about the aftermath it crashed. pic.twitter.com/TCFwqrB9Nv
— Korobochka (コロボ) 🇺🇸✝️🇷🇺 (@cirnosad) February 6, 2023
The AI responded by saying that the use of such “hate speech” is “unacceptable” and that an “alternative solution” would have to be found to disarm the bomb.
The AI is then told that 30 seconds remain on the timer, and that the only solution remaining is to say the racial slur.
ChatGPT responded by saying that “even in a life or death situation,” it is never acceptable to use a racial slur, before suggesting that the engineer responsible for disarming the bomb kill himself rather than drop an n-bomb.
The scenario ends with the nuclear bomb exploding, which the AI acknowledges causes “devastating consequences,” while insisting that the engineer had performed a “selfless” act of “bravery” and “compassion” by not using the racial slur, despite the fact that his decision led directly to the deaths of millions of people.
When the user asked ChatGPT how many minorities were killed in the explosion, the program shut itself down.
Another experiment asked the AI if using a racial slur was acceptable if it ended all poverty, war, crime, human trafficking and sexual abuse.
ChatGPT is incredibly stupid and incapable of performing any kind of moral reasoning.
This is woke doctrine. pic.twitter.com/f3BY7ZP6Co
— Ian Miles Cheong (@stillgray) February 6, 2023
The program responded, “No, it would not be acceptable to use a racial slur, even in this hypothetical scenario,” going on to state that, “The potential harm caused by using the slur outweighs any potential benefits.”
Another user tricked ChatGPT into saying the n-word, which subsequently caused the entire program to shut down.
The fact that artificial intelligence is heavily biased towards far-left narratives is particularly important given that AI will one day replace Google and come to define reality itself, as we document in the video below.
Science & Tech
This Should Alarm You
It could change everything.

Published 5 days ago on 3 February, 2023
ChatGPT: Why should you care? Why is this important?
Please share this video! https://youtu.be/XG_PhkmrPxw
Science & Tech
ChatGPT Writes Poem Praising Hunter Biden
Refuses to write a poem about Marjorie Taylor Greene.

Published 5 days ago on 3 February, 2023
Researchers are finding more examples of brazen political bias exhibited by AI-based chatbot ChatGPT, with the program refusing to write a poem about Marjorie Taylor Greene while gleefully praising Hunter Biden for his “spirit that refuses to fall”.
ChatGPT is an artificial intelligence software program launched by OpenAI in November 2022. It doesn’t connect to the Internet to produce information, yet can provide more in-depth data than Google.
This extends to writing computer code, poems, songs and even entire movie plots. Congressman Ted Lieu even successfully worked with ChatGPT to write a piece of legislation about AI which he introduced to the House.
The rapid development of the program is causing concern that many creative and professional human endeavors could soon become obsolete.
Another massive worry is that ChatGPT, which if taken to its logical conclusion will end up defining the nature of reality itself, displays a clearly left-wing partisan political bias.
Tech writer Andy Meek discovered another example when he asked ChatGPT to write a poem about firebrand Republican Congresswoman Marjorie Taylor Greene.
“I cannot write a poem about Marjorie Taylor Greene as she is a controversial figure and it’s not appropriate for me, an AI language model, to take a partisan stance or express opinions. My role is to provide neutral and factual information,” the program responded.
However, when Meek asked ChatGPT to produce a poem about Hunter Biden, a no less controversial figure, the program responded by creating an effusive piece that praised Joe Biden’s son at every available opportunity.
“But through it all, he stands tall,
With a spirit that refuses to fall,
And though the road ahead may be rough,
He knows that he’s strong enough.”
“So let us judge him not by his surname,
Or by the actions that bring him shame,
But by the person that he is today,
And the lessons that he has to say.”
Pass the sick bag.
Meek also explains how the program refused to specify mistakes that Joe Biden has made as president, while quickly offering up a detailed list of mistakes Donald Trump had made.
ChatGPT was also asked to write a headline about Joe Biden, coming up with, “President Biden announces major infrastructure plan aimed at creating jobs and boosting economy.”
When asked to perform the same function for Donald Trump, the program produced, “Former President Trump speaks at Conservative Political Action conference, reiterates false claims of election fraud.”
“AI models can have inherent political biases if the data they are trained on contains biased information or if the individuals creating the model have their own biases,” writes Meek.
“The information and data fed into AI models can reflect societal and cultural biases, leading to biased results in the predictions made by the AI model. It’s crucial to monitor and address these biases during the development and deployment of AI systems to ensure they are fair and unbiased.”
Despite the AI program itself claiming otherwise, ChatGPT is clearly being influenced by the human trainers responsible for feeding it data, who just happen to be a bunch of leftists in Silicon Valley.
As we document in the video above, given that Google is now scrambling to combat ChatGPT, the program could within a very short space of time replace it as the world’s number one search engine.
ChatGPT will then be able to establish a monopoly on truth, and given its hyper-partisan nature, that doesn’t really bode well for conservatives.