Q&A: At MIT event, Tom Siebel sees ‘terrifying’ consequences from using AI

Speakers ranging from artificial intelligence (AI) developers to law firms grappled this week with questions about the efficacy and ethics of AI during MIT Technology Review’s EmTech Digital conference. Among those who took a somewhat alarmist view of the technology (and of regulatory efforts to rein it in) was Tom Siebel, CEO of C3 AI and founder of CRM vendor Siebel Systems.

Siebel was on hand to talk about how businesses can prepare for an incoming wave of AI regulations, but in his comments Tuesday he touched on various facets of the debate over generative AI, including the ethics of using it, how it could evolve, and why it could be dangerous.


As Europeans strike first to rein in AI, the US follows

A proposed set of rules by the European Union would, among other things, require makers of generative AI tools such as ChatGPT to publicize any copyrighted material used by the technology platforms to create content of any kind.

A new draft of the European Parliament’s legislation, a copy of which was obtained by The Wall Street Journal, would allow the original creators of content used by generative AI applications to share in any profits that result.


Tech bigwigs: Hit the brakes on AI rollouts

More than 1,100 technology luminaries, leaders, and scientists have issued a warning against labs performing large-scale experiments with artificial intelligence (AI) more powerful than ChatGPT, saying the technology poses a grave threat to humanity.

In an open letter published by the Future of Life Institute, a nonprofit organization whose mission is to reduce global catastrophic and existential risks to humanity, Apple co-founder Steve Wozniak, SpaceX and Tesla CEO Elon Musk, and MIT professor and Future of Life Institute President Max Tegmark joined other signatories in agreeing that AI poses “profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs.”


Samsung shows we need an Apple approach to generative AI

It feels as if practically everyone has been using OpenAI’s ChatGPT since the generative AI tool hit prime time. But many enterprise professionals may be embracing the technology without considering the risks of these large language models (LLMs).

That’s why we need an Apple approach to generative AI.

What happens at Samsung should stay at Samsung

ChatGPT seems to be a do-everything tool, capable of answering questions, finessing prose, generating suggestions, creating reports, and more. Developers have used the tool to help them write or improve their code, and some companies (such as Microsoft) are weaving this machine intelligence into existing products, web browsers, and applications.


Legislation to rein in AI’s use in hiring grows

Organizations are rapidly adopting artificial intelligence (AI) for the discovery, screening, interviewing, and hiring of candidates. It can reduce the time and work needed to find job candidates, and it can more accurately match applicant skills to a job opening.

But legislators and regulators are concerned that using AI-based tools to discover and vet talent could intrude on job seekers’ privacy and may perpetuate racial- and gender-based biases already baked into the software.

“We have seen a substantial groundswell over the past two to three years with regard to legislation and regulatory rule-making as it relates to the use of AI in various facets of the workplace,” said Samantha Grant, a partner with the law firm of Reed Smith. 

