The AI Natives #3 - Shaping the Future: AI Self-Regulation, Evolving Tech, and an AI-Infused Reality
Exploring the realms of AI self-regulation, innovative video tech, and the conundrums of AI-generated content.
Hey there, #AI Natives!
Happy to have you here joining us with the third issue of The AI Natives!
We continue with the Hype/Danger Index on AI, announced last week. Our starting values were 60 for danger, as conversations about malicious use of deepfakes (video and voice alike) were circulating around the internet. Hype scored 80, which is hardly a surprise considering the number of announcements made by big players last week.
This week we push the "danger" score just a bit lower, and our headline news will focus on why.
The hype score also goes down a bit, as some cracks in quality are starting to show in the AI bubble, but we are still in a massive roll-out phase of these new technologies, and that will continue for a while.
The overall score moves slightly closer to the center and shows 67 this time. We see signs of regulation in AI use, and the industry is trying to implement new methods for Responsible AI development. Regulation is in many cases a good thing, but is that always true of self-regulation? We will see how it turns out in this case.
Now, let's enjoy The AI Natives issue #3!
CONVO OVER COFFEE ☕
There are two main news items for the Convo Over Coffee section:
1. Commitment to AI safeguards from biggest names in the AI field;
2. Image-to-Video and Text-to-Video are coming.
Leading AI companies have committed to AI safeguards such as third-party oversight, security testing, and addressing societal harms, aiming to manage both the promise and the risks of AI technology. The companies include Amazon, Google, Meta (formerly Facebook), Microsoft, OpenAI, Anthropic, and Inflection. The agreement was brokered by President Joe Biden's White House. More than anything, this step represents the AI industry's current sentiment toward responsible AI development: be proactive, or face regulation.
The Frontier Model Forum is a manifestation of that sentiment. Launched by Anthropic, Google, Microsoft, and OpenAI, it is an industry body aimed at ensuring the safe and responsible development of frontier AI models. The Forum's core objectives include advancing AI safety research, identifying best practices, sharing knowledge with policymakers, and supporting efforts to leverage AI for addressing society's challenges. My personal take is that these companies are trying to mitigate the political risk posed by regulatory efforts in the UK and EU by creating a more balanced approach to regulation, one where they have a say in the matter.
At the same time, Dave Willner, OpenAI's head of trust and safety, has transitioned to an advisory role after a year and a half in the position. His departure comes at a peculiar time given the news above. OpenAI is seeking a replacement, and CTO Mira Murati will manage the team on an interim basis.
AI videos are coming. A set of recent productions shown by companies like Runway, and by indie creators with the movie trailer "Genesis", presents significant improvements in the quality of AI-generated video. Mixing already proven engines from applications like Midjourney into the picture, and then transforming an image into a video, will be the new way for many creators and companies to craft stunning stories.