Hiya, folks, welcome to TechCrunch’s regular AI newsletter.
This week, surveys suggest that Gen Z — regularly the subject of mainstream media fascination — has very mixed opinions on AI.
Samsung recently polled over 5,000 Gen Zers across France, Germany, Korea, the U.K. and the U.S. on their views of AI, and tech more generally. Nearly 70% said that they consider AI to be a “go-to” resource for work-related tasks like summarizing documents and meetings and conducting research, as well as non-work-related tasks such as finding inspiration and brainstorming.
Yet, according to a report published earlier in the year by EduBirdie, a professional essay-writing service, more than a third of Gen Zers who use OpenAI’s chatbot platform ChatGPT and other AI tools at work feel guilty about doing so. Respondents expressed concerns that AI could limit their critical thinking skills and hamper their creativity.
We must take both these surveys with a grain of salt, of course. Samsung isn’t exactly an impartial party; it sells and develops many AI-powered products, so it has a vested interest in painting AI in an overall flattering light. Neither is EduBirdie, whose bread-and-butter business competes directly with ChatGPT and other AI writing assistants. It would doubtless prefer folks be wary of AI — especially AI apps that give essay pointers.
But it could be that Gen Z, while loath to discount or boycott AI entirely (if that were even possible), is more aware of the potential consequences of AI, and tech in general, than previous generations.
In a separate study from the National Society of High School Scholars, an academic honor society, the majority of Gen Zers (55%) said that they think AI will have a more negative than positive effect on society in the next 10 years. Fifty-five percent think AI will have a significant impact on personal privacy — and not in a good way.
And Gen Z’s opinions matter. A report from NielsenIQ projects that Gen Z will soon become the wealthiest generation ever, with their spending potential reaching $12 trillion by 2030 and overtaking baby boomers’ spending by 2029.
With some AI startups spending upward of 50% of their revenue on hosting, compute and software (per data from accounting firm Kruze), every dollar counts, so allaying Gen Z's fears about AI looks like a wise business move. Whether those fears can be allayed remains to be seen, given AI's many technical, ethical and legal challenges. But the least companies could do is try. Trying never hurts.
OpenAI signs with Condé: OpenAI has inked a deal with Condé Nast — the publisher of storied outlets such as The New Yorker, Vogue and Wired — to surface stories from its properties in OpenAI’s AI-powered chatbot platform ChatGPT and its search prototype SearchGPT, as well as train its AI on Condé Nast’s content.
AI demand threatens water supplies: The AI boom is fueling demand for data centers and, in turn, driving up water consumption. In Virginia — home to the world's largest concentration of data centers — water usage jumped by almost two-thirds between 2019 and 2023, from 1.13 billion gallons to 1.85 billion gallons, according to the Financial Times.
Gemini Live and Advanced Voice Mode reviews: Two new AI-powered, voice-focused chat experiences rolled out this month from tech giants: Google’s Gemini Live and OpenAI’s Advanced Voice Mode. Both feature realistic voices and the freedom to interrupt the bot at any point.
Trump reshares Taylor Swift deepfakes: On Sunday, former President Donald Trump posted a collection of memes on Truth Social that made it seem as though Taylor Swift and her fans were coming out in support of his candidacy. But my colleague Amanda Silberling writes that, as new legislation takes effect, these images could have deeper implications for the use of AI-generated images in political campaigns.
The great debate over SB 1047: The California bill known as SB 1047, which tries to stop real-world disasters caused by AI before they happen, continues to draw high-profile critics. Most recently, Congresswoman Nancy Pelosi issued a statement laying out her opposition, calling the bill “well-intentioned” but “ill-informed.”
The transformer, proposed by a team of Google researchers back in 2017, has become by far the dominant generative AI model architecture. Transformers underpin OpenAI's video-generating model Sora as well as image generators like the newest version of Stable Diffusion and Flux. They're also at the heart of text-generating models like Anthropic's Claude and Meta's Llama.
And now Google’s using them to recommend tunes.
In a recent blog post, a team at Google Research, one of Google’s many R&D divisions, details the new(ish) transformer-based system behind YouTube Music recommendations. The system, they say, is designed to take in signals, including the “intention” of a user’s action (e.g., interrupting a track), the “salience” of that action (e.g., the percentage of the track that was played) and other metadata to figure out related tracks they might like.
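Google's post doesn't include code, but the general shape of such a system is easy to sketch. Below is a toy PyTorch version of the idea: each past action is embedded from its track, intention and salience signals, a transformer encoder turns the action sequence into a user vector, and candidate tracks are ranked by similarity. All names, dimensions and fusion choices here are illustrative assumptions, not Google's actual implementation.

```python
# Toy sketch only: signal names, sizes and the fusion scheme are assumptions,
# not Google's actual YouTube Music system.
import torch
import torch.nn as nn

NUM_TRACKS, NUM_INTENTS, DIM = 10_000, 4, 64  # toy vocabulary and embedding sizes

class ToyMusicRecommender(nn.Module):
    def __init__(self):
        super().__init__()
        self.track_emb = nn.Embedding(NUM_TRACKS, DIM)
        self.intent_emb = nn.Embedding(NUM_INTENTS, DIM)  # e.g. played, skipped, queued, liked
        self.salience_proj = nn.Linear(1, DIM)            # e.g. fraction of the track played
        layer = nn.TransformerEncoderLayer(d_model=DIM, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, tracks, intents, salience, candidates):
        # Fuse the per-action signals by summing their embeddings (one simple choice).
        x = (self.track_emb(tracks)
             + self.intent_emb(intents)
             + self.salience_proj(salience.unsqueeze(-1)))
        user_vec = self.encoder(x).mean(dim=1)            # pool the history into a user vector
        return user_vec @ self.track_emb(candidates).T    # similarity score per candidate track

model = ToyMusicRecommender()
tracks = torch.tensor([[1, 42, 7]])                       # three recent listening actions
intents = torch.tensor([[0, 1, 0]])                       # played, skipped, played
salience = torch.tensor([[0.9, 0.1, 1.0]])                # fraction of each track heard
candidates = torch.tensor([5, 42, 99])                    # tracks to rank
print(model(tracks, intents, salience, candidates))       # higher score = recommend sooner
```

A production system would ingest far more signals and train on logged engagement, but the transformer's job is the same: contextualizing each action by the actions around it, so that a skip after ten seconds means something different from a skip at the final chorus.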
Google says that the transformer-based recommender led to a "significant" reduction in music skip rate and an increase in time users spent listening to music. Sounds (no pun intended) like a win for El Goog.
While it isn't exactly new, OpenAI's GPT-4o is my pick for model of the week because it can now be fine-tuned on custom data.
On Tuesday, OpenAI publicly launched fine-tuning for GPT-4o, letting developers use proprietary datasets to customize the structure and tone of the model’s responses or get the model to follow “domain-specific” instructions.
Fine-tuning isn’t a panacea, but, as OpenAI writes in a blog post announcing the feature, it can have a big impact on model performance.
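For the curious, kicking off a job is a short script with OpenAI's Python SDK. Here's a minimal sketch, assuming you've prepared a JSONL file of chat-formatted training examples; the file path is a placeholder.

```python
# Minimal sketch using OpenAI's Python SDK; "train.jsonl" is a placeholder path.
# Each line of the file holds one chat-formatted example, i.e. a JSON object
# with a "messages" list, per OpenAI's fine-tuning docs.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload the training examples, then start a fine-tuning job against GPT-4o.
training_file = client.files.create(
    file=open("train.jsonl", "rb"),
    purpose="fine-tune",
)
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-2024-08-06",  # the GPT-4o snapshot OpenAI opened to fine-tuning
)
print(job.id, job.status)  # poll the job until it reports "succeeded"
```

Once the job finishes, the resulting fine_tuned_model identifier can be passed to the chat completions endpoint like any stock model.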
Another day, another copyright suit over generative AI, this one involving Anthropic.
A group of authors and journalists this week filed a class-action lawsuit against Anthropic in federal court, alleging that the company committed “large-scale theft” in training its AI chatbot Claude on pirated e-books and articles.
Anthropic has “built a multibillion-dollar business by stealing hundreds of thousands of copyrighted books,” the plaintiffs said in their complaint. “Humans who learn from books buy lawful copies of them, or borrow them from libraries that buy them, providing at least some measure of compensation to authors and creators.”
Most models are trained on data sourced from public websites and datasets around the web. Companies argue that fair use shields their efforts to scrape data indiscriminately and use it for training commercial models. Many copyright holders disagree, however, and they, too, are filing suits aimed at halting the practice.
This latest case against Anthropic accuses it of using The Pile, a collection of datasets that includes a massive library of pirated e-books called Books3. Anthropic recently confirmed to Vox that The Pile was among the datasets in Claude’s training set.
The plaintiffs are requesting an unspecified amount of damages and an order permanently blocking Anthropic from misusing the authors’ works.