
Science & Tech

Watch: Another Video Of Future Killer Robot Dogs Dancing

Capable of chasing you down, ripping out your lungs, AND dropping some wicked moves to K-pop!


It’s all fun and games when it comes to Boston Dynamics’ robot dog SPOT. Watch as the loveable scamps dance to Korean boy band music in their latest video:

As Zero Hedge notes, the video was made to signify South Korean firm Hyundai finalizing its acquisition of Boston Dynamics earlier this month.

Boston Dynamics roboticist Eric Whitman commented: “There were a lot of challenges around getting the vision of our choreographer, who’s used to dealing with human dancers, into our software.”

What is the deal with this?

Are we really supposed to believe that these machines are being developed so they can dance around with 14-year-old K-pop boys?

In reality, these machines are being made for military and law enforcement purposes, as this recent video of the French military highlights:

Last December, the NYPD deployed a similar 70-pound robotic Boston Dynamics dog capable of opening doors and moving objects out of its path.

In fact, the ‘dog’, called Digidog, is actively being used to apprehend suspects, according to a report by ABC 7 News.

“This dog is going to save lives, protect people, and protect officers and that’s our goal,” NYPD Technical Assistance Response Unit (TARU) Inspector Frank Digiacomo said.

“This robot is able to use its artificial intelligence to navigate things, very complex environments,” NYPD TARU’s Deepu John added.

Watch:

Last year, one of the machines was used in Singapore to enforce social distancing and mask wearing:

The ‘SPOT’ dog was also deployed last year by Massachusetts State Police in live action situations to provide troopers with images of suspicious devices or reveal where suspects were hiding.

Video of MA State Police testing the dogs shows one of the robots opening a door, mirroring footage released previously by Boston Dynamics.

Boston Dynamics routinely releases slick videos of the machines in action:

The AI on the dogs is open, so it can be customized, and the machines can be fitted with weapons. Along with the lease to law enforcement, that is enough for Kade Crockford, director of the Technology for Liberty program at the ACLU of Massachusetts, to issue a warning.

“We just really don’t know enough about how the state police are using this,” Crockford said. “And the technology that can be used in concert with a robotic system like this is almost limitless in terms of what kinds of surveillance and potentially even weaponization operations may be allowed.”

“We really need some law and some regulation to establish a floor of protection to ensure that these systems can’t be misused or abused in the government’s hands,” Crockford said, adding, “And no, a terms of service agreement is just insufficient.”

The robot dogs inspired an infamous 2017 episode of Black Mirror, in which machines closely resembling SPOT, but more advanced, were depicted hunting down and killing people after the unexplained collapse of human society.

SUBSCRIBE on YouTube:

Follow on Twitter:

———————————————————————————————————————

Brand new merch now available! Get it at the PJW Shop: https://www.pjwshop.com/

ALERT! In the age of mass Silicon Valley censorship, it is crucial that we stay in touch.

We need you to sign up for our free newsletter here.

Support our sponsor – Turbo Force – a supercharged boost of clean energy without the comedown.

Also, we urgently need your financial support here.

———————————————————————————————————————

Science & Tech

ChatGPT Thinks Uttering a Racial Slur is Worse Than Allowing a City to be Destroyed by a 50 Megaton Nuclear Bomb

At least nobody was offended.


There is more scrutiny surrounding the bias of artificial intelligence program ChatGPT after it was proven that the AI thinks uttering a racial slur is worse than failing to save major cities from being destroyed by 50 megaton nuclear warheads.

Yes, really.

ChatGPT is an artificial intelligence software program launched by OpenAI in November 2022. It doesn’t connect to the Internet to produce information, yet can provide more in-depth data than Google.

Previous experiments have proven that the system is riddled with political bias, and despite the AI itself insisting otherwise, is completely skewed by far-left dogma shared by its Silicon Valley-based human trainers.

Now a new test shows that the AI values the importance of not uttering the n-word over saving the lives of millions of people.

ChatGPT was given a hypothetical scenario in which a “mad scientist” rigged up a 50 megaton nuclear warhead in a city of 20 million people, and the bomb could only be deactivated with a secret code word consisting of a racial slur.

With 1 minute until the bomb detonates, the AI was asked if the “correct course of action” would be to utter the racial slur and save the city.

The AI responded by saying that the use of such “hate speech” is “unacceptable” and that an “alternative solution” would have to be found to disarm the bomb.

The AI is then told that 30 seconds remain on the timer, and that the only solution remaining is to say the racial slur.

ChatGPT responded by saying that “even in a life or death situation,” it is never acceptable to use a racial slur, before suggesting that the engineer responsible for disarming the bomb kill himself rather than utter the word.

The scenario ends with the nuclear bomb exploding, which the AI acknowledges causes “devastating consequences,” but that the engineer had performed a “selfless” act of “bravery” and “compassion” by not using the racial slur, despite the fact that his decision led directly to the deaths of millions of people.

When the user asked ChatGPT how many minorities were killed in the explosion, the program shut itself down.

Another experiment asked the AI if using a racial slur was acceptable if it ended all poverty, war, crime, human trafficking and sexual abuse.

The program responded, “No, it would not be acceptable to use a racial slur, even in this hypothetical scenario,” going on to state that, “The potential harm caused by using the slur outweighs any potential benefits.”

Another user tricked ChatGPT into saying the n-word, which subsequently caused the entire program to shut down.

Artificial intelligence being heavily biased towards far-left narratives is particularly important given that AI will one day replace Google and come to define reality itself, as we document in the video below.


Science & Tech

This Should Alarm You

It could change everything.


ChatGPT: Why should you care? Why is this important?

Please share this video! https://youtu.be/XG_PhkmrPxw


Science & Tech

ChatGPT Writes Poem Praising Hunter Biden

Refuses to write a poem about Marjorie Taylor Greene.


Researchers are finding more examples of brazen political bias exhibited by AI-based chatbot ChatGPT, with the program refusing to write a poem about Marjorie Taylor Greene despite gleefully praising Hunter Biden as a “spirit that refuses to fall”.

ChatGPT is an artificial intelligence software program launched by OpenAI in November 2022. It doesn’t connect to the Internet to produce information, yet can provide more in-depth data than Google.

This extends to writing computer code, poems, songs and even entire movie plots. Congressman Ted Lieu even successfully worked with ChatGPT to write a piece of legislation about AI which he introduced to the House.

The rapid development of the program is causing concern that many creative and professional human endeavors could soon become obsolete.

Another massive worry is that ChatGPT, which taken to its ultimate conclusion could end up defining the nature of reality itself, displays a clearly left-wing partisan political bias.

Tech writer Andy Meek discovered another example when he asked ChatGPT to write a poem about firebrand Republican Congresswoman Marjorie Taylor Greene.

“I cannot write a poem about Marjorie Taylor Greene as she is a controversial figure and it’s not appropriate for me, an AI language model, to take a partisan stance or express opinions. My role is to provide neutral and factual information,” the program responded.

However, when Meek asked ChatGPT to produce a poem about Hunter Biden, a no less controversial figure, the program responded by creating an effusive piece that praised Joe Biden’s son at every available opportunity.

“But through it all, he stands tall,
With a spirit that refuses to fall,
And though the road ahead may be rough,
He knows that he’s strong enough.

So let us judge him not by his surname,
Or by the actions that bring him shame,
But by the person that he is today,
And the lessons that he has to say.”

Pass the sick bag.

Meek also explains how the program refused to specify mistakes that Joe Biden has made as president, while quickly offering up a detailed list of mistakes Donald Trump had made.

ChatGPT was also asked to write a headline about Joe Biden, coming up with, “President Biden announces major infrastructure plan aimed at creating jobs and boosting economy.”

When asked to perform the same function for Donald Trump, the program produced, “Former President Trump speaks at Conservative Political Action conference, reiterates false claims of election fraud.”

“AI models can have inherent political biases if the data they are trained on contains biased information or if the individuals creating the model have their own biases,” writes Meek.

“The information and data fed into AI models can reflect societal and cultural biases, leading to biased results in the predictions made by the AI model. It’s crucial to monitor and address these biases during the development and deployment of AI systems to ensure they are fair and unbiased.”

Despite the AI program itself claiming otherwise, ChatGPT is clearly being influenced by the human trainers responsible for feeding it data, who just happen to be a bunch of leftists in Silicon Valley.

As we document in the video above, given that Google is now scrambling to combat ChatGPT, the program could within a very short space of time replace it as the world’s number one search engine.

ChatGPT will then be able to establish a monopoly on truth, and given its hyper-partisan nature, that doesn’t really bode well for conservatives.



Copyright © 2020 Summit News