Mashable

Subscribe to Mashable feed
Mashable is a leading source for news, information & resources for the Connected Generation. Mashable reports on the importance of digital innovation and how it empowers and inspires people around the world. Mashable's 25 million monthly unique visitors and 10 million social media followers have become one of the most engaged online news communities. Founded in 2005, Mashable is headquartered in New York City with an office in San Francisco.
Updated: 2 hours 1 min ago

NYT Connections today: See hints and answers for May 15

Tue, 05/14/2024 - 21:00

Connections is the latest New York Times word game that's captured the public's attention. The game is all about finding the "common threads between words." And just like Wordle, Connections resets after midnight and each new set of words gets trickier and trickier—so we've served up some hints and tips to get you over the hurdle.

If you just want to be told today's puzzle, you can jump to the end of this article for May 14's Connections solution. But if you'd rather solve it yourself, keep reading for some clues, tips, and strategies to assist you.

What is Connections?

The NYT's latest daily word game has become a social media hit. The Times credits associate puzzle editor Wyna Liu with helping to create the new word game and bringing it to the publications' Games section. Connections can be played on both web browsers and mobile devices and require players to group four words that share something in common.

Tweet may have been deleted

Each puzzle features 16 words and each grouping of words is split into four categories. These sets could comprise of anything from book titles, software, country names, etc. Even though multiple words will seem like they fit together, there's only one correct answer. If a player gets all four words in a set correct, those words are removed from the board. Guess wrong and it counts as a mistake—players get up to four mistakes until the game ends.

Tweet may have been deleted

Players can also rearrange and shuffle the board to make spotting connections easier. Additionally, each group is color-coded with yellow being the easiest, followed by green, blue, and purple. Like Wordle, you can share the results with your friends on social media.

Here's a hint for today's Connections categories

Want a hit about the categories without being told the categories? Then give these a try:

  • Yellow: Creating a law

  • Green: Large swaths of green land

  • Blue: Communicative

  • Purple: Misspelled car manufacturers

Featured Video For You Connections: How to play and how to win Here are today's Connections categories

Need a little extra help? Today's connections fall into the following categories:

  • Yellow: Bit of Legislation

  • Green: Grassland

  • Blue: Forthright

  • Purple: Car Companies Minus Letter

Looking for Wordle today? Here's the answer to today's Wordle.

Ready for the answers? This is your last chance to turn back and solve today's puzzle before we reveal the solutions.

Drumroll, please!

The solution to Connections #337 is...

What is the answer to Connections today
  • Bit of Legislation: ACT, BILL, MEASURE, RESOLUTION

  • Grassland: MEADOW, PLAIN, PRAIRE, SAVANNA

  • Forthright: DIRECT, FRANK, OPEN, STRAIGHT

  • Car Companies Minus Letter AURA, BUCK, DOGE, HODA

Don't feel down if you didn't manage to guess it this time. There will be new Connections for you to stretch your brain with tomorrow, and we'll be back again to guide you with more helpful hints.

Is this not the Connections game you were looking for? Here are the hints and answers to yesterday's Connections.

Google injects still more AI into Google Docs and other Workspace apps

Tue, 05/14/2024 - 20:30

On Tuesday at Google I/O, Google's much relied-upon — but rarely loved — Google Workspace software suite got a major injection of additional AI features that are coming soon.

SEE ALSO: Everything announced at Google I/O, including AI Agents, Ask Photos, and more

Gemini 1.5 Pro, from the language model family formerly known as Bard, is being plastered into the side panel in Google Docs, Sheets, and Slides — not to mention Drive and Gmail. These applications are already interconnected, but this slate of features aims to automate those connections via a chirpy AI-powered assistant with the power to — in theory — teleport from app to app, doing work tasks that used to be labor-intensive. 

Google is clearly envisioning a more seamless and integrated experience across Workspace, enabled by the centralization of all the user's documents and data. With Gemini functionality perpetually available on the screen, users are being encouraged to ask the bot quotidian questions or request little favors. While in Docs, Gemini can dig up details found in emails, or organize lists into spreadsheets automatically. 

Users also aren't required to specify exactly which applications they expect Gemini to use to perform the functions in question. In the demo, a user simply asks the AI assistant to help them organize, and it invents a system in which it will place files in a new folder, and organize the data from said files into a spreadsheet. 

Credit: Mashable screenshot from Google's presentation

If you're excited by the prospect of an AI-assisted workflow, it's worth pausing for a moment to consider data security. Last year, a New York Times report notes, there was a great deal of internal discussion at Google when the company attempted to rework its privacy agreement to begin mining users' publicly available Google Docs for AI training data. Google can now use such data according to its user agreement, but only chooses to incorporate data from users who opt into experimental Google features, the Times reported. 

It's also worth noting that we've only seen a demo so far. AI assistants have, thus far, been buggy, lying robots, seemingly rushed to the market way too quickly. With OpenAI nipping at Google's heels, Google's new AI-enabled glow-up for Workspace can't just be on trend. As the name implies, it has to work.

Gmail gets big Gemini update: 3 new AI features, including 'CliffsNotes' for your inbox

Tue, 05/14/2024 - 16:59

At Google I/O, the search-engine tech giant boasted that Gmail is poised to get new AI capabilities via Gemini. With Gemini underpinning Gmail, a new field will appear, allowing you to ask the AI chatbot to summarize certain emails in your inbox.

Summarization isn't the only thing rolling out to Gmail. Here are three features Google has announced for its popular inbox app.

SEE ALSO: Everything announced at Google I/O, including AI Agents, Ask Photos, and more 1. Gmail can summarize your emails for you

As mentioned, Google is sprinkling some more Gemini magic into Gmail. Here's an example Google provided to illustrate Gmail's new capabilities: Imagine you have a child in elementary school. Perhaps you have a number of emails from said school, but you still need to catch up. Instead of wasting time and parsing through those emails, you can now use Gemini as a "CliffsNotes" tool, allowing you to summarize those emails.

Credit: Google

If you use a prompt like, "Catch me up on emails from Maywood Park Elementary School," Gemini will give you a rundown of everything you missed — and you don't need to open a single email.

2. Get Google Meet highlights

Let's say you miss an hour-long Google Meet, and you don't have the time to sit through the entire recording that was sent to your email.

Credit: Google

Using the panel on the right, you can ask Gemini to tell you the main points of the meeting, allowing you to get up to speed with your team.

3. Ask Gemini questions about information in your emails

While showcasing Gemini's capabilities on mobile devices, Google boasted that the AI model can find information for you, even if it's buried deep inside your inbox.

Credit: Google

For example, in the Google I/O presentation, a demo showed a woman asking, "When are my shoes arriving?" and "What times do the doors open for the Knicks game?" You don't need to scramble to find the right emails that give you the answers to these questions. Instead, Gemini will sift through your emails and hand you the answers without much effort on your part.

Google says that the summarization features are rolling out this month, while the Q&A capabilities will be available in July.

Google's accessible hands-free cursor is coming to Android

Tue, 05/14/2024 - 16:57

Android users will soon have access to a revolutionarily accessible form of device control: Completely hands-free navigation, using the power of Google AI and facial tracking.

Part of the company's long list of updates and announcements rolling out at its developer keynote event today, the new feature is a mobile version of Google's desktop offering known as Project Gameface. The new virtual cursor uses Android accessibility services and a database of facial expressions from MediaPipe’s Face Landmarks Detection API to allow for broader customization and manipulation of hands-free tech for both users and developers.

SEE ALSO: Everything announced at Google I/O, including AI Agents, Ask Photos, and more

"Through the device's camera, it seamlessly tracks facial expressions and head movements, translating them into intuitive and personalized control. Developers can now build applications where their users can configure their experience by customizing facial expressions, gesture sizes, cursor speed, and more," Google explained in its announcement.

Google partnered with international accessibility solutions group Incluzza to test the Gameface expansion in broader, non-gaming contexts, including work and social tasks.

Project Gameface first launched in 2023 as an open-source, hands-free gaming mouse allowing users to operate computer cursors with just head and facial movements. The technology was designed in collaboration with viral video game streamer Lance Carr, who is quadriplegic, as a more accessible option to expensive head-tracking systems. It also introduced the option of gesture size, allowing more customizability for users of different mobility.

"We’ve been delighted to see companies like playAbility utilize Project Gameface building blocks, like MediaPipe Blendshapes, in their inclusive software," Google wrote. "Now, we’re open sourcing more code for Project Gameface to help developers build Android applications to make every Android device more accessible."

Google also announced new AI features for its screenreader technology, TalkBack, which will provide more detailed descriptions and fill in information for unlabeled images on the web to help users who are blind or have low vision.

Project Gameface is now available for developers on GitHub.

7 of the most exciting moments from the 'House of the Dragon' Season 2 teaser

Tue, 05/14/2024 - 16:23

House of the Dragon Season 2's official trailer is here, and there are so, so many moments to freak out about.

Like the dueling trailers before it, this new House of the Dragon trailer draws the battle lines between Rhaenyra Targaryen (Emma D'Arcy) and her half-brother Aegon II Targaryen (Tom Glynn-Carney), each vying for the Iron Throne. It also delivers a whopping helping of dragon action, Targaryen family drama, and foreshadowing for fans of George R.R. Martin's Fire & Blood. From devastating battles to our return to the North, here are the 7 most exciting moments from the House of the Dragon Season 2 trailer.

SEE ALSO: 'House of the Dragon' Season 2 trailer: Westeros prepares for war "We're going to King's Landing." Is he going to take a seat? Credit: Screenshot: HBO

The trailer opens with Daemon Targaryen (Matt Smith) striding into the throne room in the Red Keep and staring down the Iron Throne. Could this be the aftermath of some great battle? A dream sequence? Or just some day when the Red Keep security was particularly lax? Either way, as he tells Rhaenys (Eve Best) later in the trailer, he certainly plans on going to King's Landing. In other words, Team Black is taking the fight to Team Green. Let's kick the Dance of the Dragons up a notch!

Rhaenyra has a sword! Queen behavior. Credit: Screenshot: HBO

Much of the trailer sees Rhaenyra pondering her decision to go to war and wondering whether she has the support of those around her, but this moment of sword wielding is purely decisive badassery. Hopefully we'll get to see her put the sword to use in battle, maybe even from dragonback? Speaking of...

SEE ALSO: Everything we know about 'House of the Dragon' Season 2 Expect even more dragons in House of the Dragon Season 2 — and more dragon battles. Aegon and Sunfyre mean business. Credit: Screenshot: HBO

The trailer for House of the Dragon is overflowing with dragons, including Rhaenyra and her dragon Syrax, and Aegon on his dragon Sunfyre, seen above, looking as if they're up to no good.

Daemon's dynamic dragon duo. Credit: Screenshot: HBO

Daemon and his dragon Caraxes make an appearance as well, landing on a massive stone structure in the middle of a rainstorm. At the end of Season 1, Daemon did suggest taking the castle of Harrenhal in the Riverlands — perhaps this is him making good on his promise, in the most threatening way possible.

SEE ALSO: 'House of the Dragon' recap: Every death, ranked by gruesomeness Baela and Moondancer on the warpath. Credit: Screenshot: HBO

Daemon's daughter Baela (Bethany Antonia) takes to the skies on her dragon Moondancer in the trailer as well. From the looks of it, she's chasing down Ser Criston Cole (Fabien Frankel) during a large battle. Whatever he did, I'm sure he deserves being dragon prey.

Rhaenys and Meleys always know how to make an entrance. Credit: Screenshot: HBO

Remember when Rhaenys crashed Aegon's coronation with her dragon Meleys? Looks like we're getting more of these two queens in Season 2.

If you're ever this close to Vhagar, it's simply too late for you. Credit: Screenshot: HBO

And, of course, it wouldn't be a House of the Dragon dragon party without Vhagar, Aemond's (Ewan Mitchell) ancient dragon. Here's hoping she commits less child murder this season — although I'm sure quite a bit of adult murder is on the table based on how much fire she's spewing here.

House of the Dragon Season 2 takes us back to Winterfell — and the Wall. "Night gathers, and now my watch begins." Credit: Screenshot: HBO

Rhaenyra's eldest son, Jacaerys (Harry Collett), headed north at the end of Season 1 to secure allies, so you know what that means: It's Stark time!

This season, Jacaerys will meet up with Cregan Stark (Tom Taylor), the current Lord of Winterfell. Judging by the trailer, the two will go even further north, ending up at the Wall where dear old Jon Snow spent most of his days. It feels just like coming home (and also getting frostbite).

The Battle of Rook's Rest is coming. My guess? We're looking at Rook's Rest. Credit: Screenshot: HBO

The trailer for House of the Dragon Season 2 is full of promises of big battles, from armies mobilizing to dragons wheeling around in the sky. One image that keeps popping up in these snapshots is the castle above, which we also see Rhaenys and Meleys flying toward. The importance of this location makes me think this is Rook's Rest, the seat of House Staunton and the site of a major battle in Fire & Blood. No spoilers, but book readers know: This is a big one.

Riots in King's Landing threaten Alicent and Helaena's lives. Alicent Hightower needs to hightail it out of here. Credit: Screenshot: HBO

All the way back in Season 2 of Game of Thrones, riots in King's Landing broke up a procession including figures like Joffrey Baratheon, Cersei Lannister, and Sansa Stark. It looks like something similar will happen on Season 2 of House of the Dragon, with Alicent Hightower (Olivia Cooke) and her daughter, Helaena (Phia Saban), getting targeted by a wrathful mob. What led them to the streets of King's Landing, and, more importantly, what angered their subjects to the point of riots?

SEE ALSO: Sauron slays in 'The Lord of the Rings: The Rings of Power' Season 2 trailer Underhand dealings promise bloody, cheesy terror. Blood? Cheese? Is that you? Credit: Screenshot: HBO

At one point in the trailer, spymaster Mysaria (Sonoya Mizuno) tells someone that there is "more than one way to fight a war." Next, we see shots of shady figures making their way through a building by torchlight. These, coupled with Mysaria's words and a quick flash of coins exchanging hands, hint at one of the darkest moments of Fire & Blood: the arrival of two characters known simply as Blood and Cheese. Let's just say that this is Team Black's revenge for Lucerys' (Elliot Grihault) death, and it is understandably brutal.

House of the Dragon Season 2 premieres June 16 on HBO and Max.

How Google's LearnLM plans to supercharge education for students and teachers

Tue, 05/14/2024 - 16:22

At Google's annual I/O Developer's Conference, the company went all-in on AI, including addressing how it plans to apply the power of artificial intelligence to improve education and learning. "What if everyone everywhere could have their own personal AI tutor on any topic," Google SVP James Manyika asked the conference's keynote audience. "Or what if every educator could have their own assistant in the classroom?"

The answer is LearnLM, a family of language models grounded in educational research and made with educators in mind. Here are three ways the model might deliver more "personalized learning experiences" for students and teachers alike.

A new 'Learning Coach' assistant

Earlier in the morning, Google announced that users would be able to create their own customized AI assistant called a "gem." With their introduction of LearnLM, the company added that pre-made gems especially made for learning would be rolled out in the future. One of these, a "Learning Coach" gem, will provide step-by-step study guidance," and "practice techniques designed to build understanding, rather than just give you the answer." One example provided during the presentation was a Learning Coach gem generating an mnemonic to help a hypothetical student better remember the formula for photosynthesis.

Asked. Credit: Google Answered (sort of) by LearnLM. Credit: Google Interactive learning on YouTube

On YouTube, a LearnLM assistant will respond to viewer questions under educational videos or generate a quiz for the viewer based on the video's information. This feature is already available to select Android users as Google partners with Columbia Teachers College, Arizona State University, and Khan Academy to test and improve its performance. No word on when this YouTube-specific learning assistant will be available to the more than 2 billion people around the world who use the platform every month.

AI tools for educators

Google also noted it's working directly with educators to apply LearnLM's efficiencies to Google Classroom, where teachers will eventually be able to simplify and improve lesson planning, or tailor lessons to the individual needs of their students. Google is also collaborating with MIT's Responsible AI for Social Empowerment and Education Initiative (RAISE) to develop an online course for educators that will help them better understand and leverage generative AI in the classroom.

Featured Video For You Here’s everything that was announced at Google I/O.

Google I/O: Google announces new safety framework for responsible AI

Tue, 05/14/2024 - 15:39

As Google positions its upgraded generative AI as teacher, assistant, and recommendation guru, the company is also trying to turn its models into a bad actor's worst enemy.

"It's clear that AI is already helping people," said James Manyika, Google's senior vice president of research, technology, and society, to the crowd at the company's Google I/O 2024 conference. "Yet, as with any emerging technology, there are still risks, and new questions will arise as AI advances and its uses evolve."

Manyika then announced the company's latest evolution of red teaming, an industry standard testing process to find vulnerabilities in generative AI. Google's new "AI-assisted red teaming" trains multiple AI agents to compete with each other to find potential threats. These trained models can then more accurately pinpoint what Google calls "adversarial prompting" and limit problematic outputs.

SEE ALSO: Gemini Nano can detect scam calls for you SEE ALSO: Google I/O: New Gemini App wants to be the AI assistant to top all AI assistants

The process is the company's new plan for building a more responsible, humanlike AI, but its also being sold as a way to address growing concerns about cyber security and misinformation.

The new safety measures incorporate feedback from a team of experts across tech, academia, and civil society, Google explained, as well as its seven principles of AI development: Being socially beneficial, avoiding bias, building and testing for safety, human accountability, privacy design, upholding scientific excellence, and public accessibility. Through these new testing efforts, and industry-wide commitments, Google's attempting to putting product where its words are.

Featured Video For You Here’s everything that was announced at Google I/O.

Everything announced at Google I/O, including AI Agents, Ask Photos, and more

Tue, 05/14/2024 - 15:16

Google held its I/O event aimed at developers on Tuesday. The event was expected to provide lots of news and announcements, and it did not disappoint.

Nothing is ever certain, but coming into Tuesday's event, it was expected that Google would announce significant updates to its chatbot Gemini. In actuality, we got a mess of AI announcements, most centered on Gemini and its new capabilities.

Here is everything announced during I/O, as well as some of Mashable's corresponding coverage so that you can dive deeper.

DJ mode for Google's Music FX

OK, so this was actually announced in March, but it got a sick demonstration on Tuesday, courtesy of DJ Marc Rebillet, a.k.a. Loop Daddy.

Tweet may have been deleted AI Overviews

Google's first major announcement of I/O was that it would add AI overviews to search. The hope is that AI can take numerous sources of information and make a small, digestible overview for users. Mashable's testing, however, has found the tool unreliable so far.

SEE ALSO: Here's what Google's AI-powered search looks like Ask Photos

Google announced at I/O that Google Photos would get a powerful new AI tool called Ask Photos. Google said the feature can effectively parse through your pictures and answer questions you might have, like " What is my license plate number?" or "When did my kid learn to swim?"

Basically, it seems like it could prove to be an advanced search feature.

Credit: Screenshot: Google 'AI Agents' a.k.a. AI personal assistants

Google debuted AI agents at its I/O event on Tuesday. CEO Sundar Pichai said the AI Agents are in the "early days" but that the idea is that the feature will be able to complete complex tasks for you via AI. An example given by Pichai was returning a pair of shoes—the AI Agent could go through your emails, fill out a return form, and set up a pick-up appointment to return the shoes.

It's unclear when the Agents will be available to the public.

Project Astra

Google announced a new AI Agent — or, at least, it played an apparent demo of it — that is a multimodal tool, meaning you can point it at IRL things and get answers. Google dubbed the tool Project Astra.

Examples in the demo included recognizing the details in code, determining a person's neighborhood, and determining where a misplaced item was last seen. No firm date was announced, however, for users to access such features.

Tweet may have been deleted Gemini 1.5 Pro and Gemini 1.5 Flash

Google announced Gemini 1.5 Pro and Flash at the I/O event. Both are new versions of Google's AI model. Pro will help support many of the new features demonstrated at I/O. Gemini 1.5 Flash is, well, basically Pro, but faster.

"Flash is a lighter-weight model compared to Pro," said Demis Hassabis, CEO of Google DeepMind. "It's designed to be fast and cost-efficient to serve at scale while still featuring multi-model reasoning capabilities and breakthrough long context."

SEE ALSO: Google's AI model just got faster with Gemini 1.5 Flash New search functions

Google's I/O event brought some news for a function that is basically synonymous with the company's name. Google said it would soon have features like video searching, planning via search (like making travel itineraries), and contextual search.

Mashable's Chance Townsend has more details.

AI Teammate

Have you ever wanted an AI co-worker? No? Well, it's too bad; you might be getting one. Google announced a new feature called AI Teammate at I/O, and basically, it's an AI chatbot that'll function as a mock co-worker. It can serve as a hub for all the details co-workers have shared while getting their jobs done. So now, whether you want it or not, you might have a chatbot helping you finish your work.

Tweet may have been deleted Gemini Nano can detect scam calls

Scam calls stink. At Google I/O, the company announced Nano, its smallest AI model that can run entirely on a device. A key feature? It can intercept a spam call, which means AI would be listening to your phone calls.

Mashable's Stan Schroeder has more details.

A new Gemini App

Google announced a new Gemini app at I/O, which is an AI assistant. The app will integrate text, video, and voice prompts. It'll also feature "Gems," which are customizable personal assistants for specific activities like cooking or exercise.

Veo, Google's answer to Sora

Google debuted Veo, a video generator similar to OpenAI's Sora. Hassabis, the Google Deepmind CEO, promised that "Veo creates high-quality 1080p videos from text images and video prompts." That, well, sounds a lot like Sora.

SEE ALSO: Veo, Google's Sora competitor, is hyped by Donald Glover

This story is a developing story and will be updated...

Veo, Google's Sora competitor, is hyped by Donald Glover

Tue, 05/14/2024 - 15:05

Google, with the help of creative renaissance man Donald Glover, has demoed an AI video generator to compete with OpenAI's Sora. The model is called Veo, and while no clear launch date or rollout plan has been announced, the demo does appear to show a Sora-like product, apparently capable of generating high-quality, convincing video.

SEE ALSO: Google I/O 2024: What to expect after OpenAI's GPT-4o reveal

What's "cool" about VEO? "You can make a mistake faster," Glover said in a video shown during Google's I/O 2024 livestream. "That's all you really want at the end of the day — at least in art — is just to make mistakes fast." 

Credit: Mashable screenshot from a Google promo

Speaking onstage in Hawaii at Google I/O, Google Deepmind CEO Demis Hassabis said, "Veo creates high quality 1080p videos from text image and video prompts." This makes Veo the same type of tool, with the same resolution as Sora on its highest setting. A slider shown in the demo shows a Veo video length being stretched out to a little over one minute, also the approximate length of a Sora video. 

Since Veo and Sora are both unreleased products, there's very little use trying to compare them in detail at this point. However, according to Hassabis, the interface will allow Veo users to "further edit your videos using additional prompts." This would be a function that Sora doesn't currently have according to creators who have been given access.

Tweet may have been deleted

What was Veo trained on? That's not currently clear. About a month ago, YouTube CEO Neal Mohan told Bloomberg that if OpenAI used YouTube videos to train Sora, that would be a "clear violation" of the YouTube terms of service. However, YouTube's parent company Alphabet also owns Google, which made Veo. Mohan strongly implied in that Bloomberg interview that YouTube does feed content to Google's AI models, but only, he claims, when users sign off on it.  

What we do know about the creation of Veo is that, according to Hassabis, this model is the culmination of Google and Deepmind's many similar projects, including Deepmind's Generative Query Network (GQN) research published back in 2018, last year's VideoPoet, Google's rudimentary video generator Phenaki, and Google's Lumiere, which was demoed earlier this year. 

Glover's specific AI-enabled filmmaking project hasn't been announced. According to the video at I/O, Glover says he's "been interested in AI for a couple of years now," and that he reached out to Google — and apparently not the other way around. "We got in contact with some of the people at Google and they had been working on something of their own, so we're all meeting," Glover says in Google's Veo demo video.  

There's currently no way for the general public to try Veo, but there is a waitlist signup page.

Google I/O: New Gemini App wants to be the AI assistant to top all AI assistants

Tue, 05/14/2024 - 14:55

Google's self-proclaimed "most intelligent AI experience" yet is redefining how humans interact with AI via a new app. The latest Gemini offering was announced at Google I/O on Tuesday.

"Our vision for the Gemini app is to be the most helpful, personal AI assistant by giving you direct access to Google's latest AI models," said Sissie Hsao, general manager for Gemini experiences and Google Assistant. The multimodal app —which integrates text, video, and newly-announced voice technology for more "natural" prompting — uses brand new Gemini technology and builds off of the new Gemini 1.5 Pro and Gemini 1.5 Flash, also announced today.

The app's adaptable voice feature, coined Gemini Live, debuts this summer, allowing users to have real time conversations with Google's AI helper. It will also incorporate the company's Project Astra video capabilities, touted as the next big visual assistant that can process video-based queries in real time.

In addition, Google will roll out new advanced features on the app through its Dynamic UI, like a trip planning assistant that incorporates Google's search, maps, calendars, and other features for personalized travel advice. Gemini Advanced users will have access to new massive storage and processing capabilities (think 30,000 lines of code, a 1,500 page thesis, or an hour long video), too.

SEE ALSO: Google's AI model just got faster with Gemini 1.5 Flash

But the company's biggest reality pitch for the app is its timesaving "Gems," or customizable prompts that users can save and pilot over and over again for very specific uses. Examples include a "Yoga Bestie" Gem, a "Calculus Tutor" Gem, and a "Sous Chef" Gem.

Credit: Google

The revolutionized mobile experience was previewed shortly after the company announced new workspace integrations with the company's AI teammates — another bid for Google's complete AI takeover. "As gemini and its capabilities continue to evolve," said general manager of Google Workspace Aparna Pappu,"We are diligently bringing that power into workspace to make all our users more productive and creative, both at home and at work."

Featured Video For You Here’s everything that was announced at Google I/O.

Gemini can now make generative memes in Google Messages

Tue, 05/14/2024 - 14:54

Now the unfunniest guy you know can be even less funny in the group chat.

At Google I/O on Tuesday, Google showed off what felt like dozens of new things you can do with its in-house Gemini AI, but one of them stood out more than the others. In Google Messages on Android devices, users will be able to bring up the Gemini overlay and create AI-generated meme response images to drag and drop into conversations.

SEE ALSO: Ask Photos is Google's new AI feature for Google Photos

Google said this will roll out "over the next few months" in a blog post.

Cool. Credit: Google

There isn't much to say about this, really. If you want to post a reaction image, but you're somehow not creative enough to just use one of the thousands of existing (and already funny) reaction images online, this is now a thing you can do.

Will your friends in the group chat enjoy it? Probably not, but that's not my problem.

Featured Video For You Here’s everything that was announced at Google I/O. SEE ALSO: Google Search at Google I/O: You can now ask questions with video and 3 other features

Gemini Nano can detect scam calls for you

Tue, 05/14/2024 - 14:47

Google I/O is underway, and the company is firing on all cylinders, introducing feature after feature that leverages AI to make your life easier.

Some of the features that caught our eye have to do with Gemini Nano, the company's smallest Gemini-based model. It's still pretty capable, but it's small enough to run entirely on device, meaning it can perform tasks faster than other versions of Gemini.

As Google demoed on I/O stage on Tuesday, Gemini Nano can help a person with poor eyesight get more textual context when they receive an image. Another pretty cool feature, also demoed on stage, is Gemini Nano intercepting a scam call.

In the demonstration, an unknown number calls the recipient's phone, and says, with suspiciously little info and context, that the victim's bank funds are being threatened, and that they should be moved to a safe location. At that moment, Gemini Nano interrupts, rightfully determining that the call is likely a scam, as banks will never ask you to move your money elsewhere to keep it safe.

As comforting as it is to know that there's someone helping you weed out scam calls, it is a bit disconcerting that an AI is listening to the content of your calls, but that's the new AI-powered world we're living in. Fortunately, since this is Nano we're talking, the audio is processed on the phone itself, so your data should remain on your device.

SEE ALSO: 'AI Teammate' announced at Google I/O 2024 — your new AI-powered co-worker friend

Gemini Nano has been available on Google's Pixel 8 and Pixel 8 Pro, but Google is also building it into Chrome, starting with version 126. There, it will power AI features including generating text when needed.

The scam call detection feature, in particular, is undergoing testing right now, and Google will have more to share "later this summer."

'AI Teammate' announced at Google I/O 2024 — your new AI-powered co-worker friend

Tue, 05/14/2024 - 14:41

If you've ever used an AI chatbot, you'd know it's a very singular experience. Your chatbot interacts with what an individual says or asks.

At Google I/O, as part of its Gemini for Workspace platform, the search giant shared a new feature: AI Teammate. AI Teammate basically takes that solo chatbot experience and puts the AI bot in a multi-user space, turning the AI chatbot into sort of a mock fellow co-worker.

SEE ALSO: Google I/O 2024: 'AI Agents' are AI personal assistants that can return your shoes Setting up an AI Teammate in Gemini for Workspace Credit: Google

According to Google, an AI Teammate can appear alongside others within the company in chat groups, emails, and documents just as any other employee would. The AI Teammate can have an identity – at Google I/O, they showcased an AI Teammate named "Chip" – as well as its own workspace account. The AI Teammate can have a specific role in the company and carry out specific objectives.

For example, if a user asks a question in a group chat, the AI Teammate can answer the question based on what's been addressed anytime earlier in the chat group. If the AI Teammate was added to emails or files, the chatbot can also utilize that information in its answers as well. Everyone in the group chat can see the chatbot's answers, just like they could any other co-worker, and further interact with the chatbot as well.

Google's AI Teammate "Chip" provides info in a group chat. Credit: Google

Basically, the main selling point here is that the AI Teammate can build a knowledge base based on what the entire team has shared and not just one singular user. That AI Teammate can then distribute that information to everyone as well.

It's an interesting take on the AI chatbot experience. There has been some third-party AI wrappers that have been developed with a similar idea in mind, but Google appears to be the first of the large language model companies to introduce this feature.

Google Search at Google I/O: You can now ask questions with video and 3 other features

Tue, 05/14/2024 - 14:17

Google's vision of an AI-driven future for search is coming to fruition with the rollout of AI Overviews in the US, and soon globally. This feature, previously dubbed the Search Generative Experience (SGE), introduces AI-generated summaries at the top of many search results, changing how billions interact with Google Search.

At the forefront of the tech giant's generative AI additions to search, revealed at its I/O 2024 event, are AI Overviews. This new tool aims to streamline search processes by presenting a concise, AI-generated summary at the top of search results. This new feature allows Google to provide AI-generated overviews or summaries directly in the search results, potentially offering more comprehensive answers to complex queries.

Here are some other key features that will come to Google Search in the coming weeks:

Search by Video Tweet may have been deleted

Google has expanded the functionality of Google Lens to include video-based search capabilities. Users can now capture video clips to initiate searches, allowing for a more dynamic and interactive query process. This feature caters to the growing demand for more intuitive and flexible search tools, accommodating various forms of user interaction with the digital world.

Planning with Search Credit: Google

Google's new planning tool automates the creation of personalized itineraries and meal plans with just a single query. By leveraging Gemini, this tool simplifies the planning process (e.g., travel or daily meals) by generating customized suggestions based on user preferences and previous interactions.

Contextual Search Credit: Google

Another feature is the AI-driven reorganization of search results based on the context of the query. For instance, when searching for restaurants in a new city, Google now categorizes the results to fit scenarios like date nights or business meetings. This tailored approach minimizes user effort in sifting through irrelevant information, making the search experience more efficient and user-friendly.

Featured Video For You Here’s everything that was announced at Google I/O.

Google's AI model just got faster with Gemini 1.5 Flash

Tue, 05/14/2024 - 13:58

Gemini is getting a facelift.

Sundar Pichai announced updates to Google's AI model at the company's annual Google I/O event on May 14, including updates to Gemini 1.5 Pro and a new model of Google Gemini called Gemini 1.5 Flash. 

"We want everyone to benefit from what Gemini can do," Pichai said.

Tweet may have been deleted

Gemini 1.5 Pro was originally announced in February, but the company announced at I/O that it's getting a bit of an update with new API features like video frame extraction, parallel function calling, and context caching. Pro will help support AI Overviews, Ask Photos, NotebookLM, and more. In the seemingly perpetual fight to the front of the AI world, Gemini 1.5 Pro is an invaluable resource for Google, which plans to use the AI model as the basis for Pixie — the new version of the Google Assistant — and other Google products.

And there's a new member to Google's family of AI models, showing the world a new model: Gemini 1.5 Flash. It has much of the same capabilities as 1.5 Pro but should run faster and more efficiently due to lower latency and a lower cost to serve.

As Josh Woodward, the senior director of product management inside Google's Labs incubator group, said at I/O, Gemini 1.5 Pro is for "complex and general tasks" and "high-quality responses," while Gemini 1.5 Flash is for "high-frequency and narrow tasks" and "fastest response time." Both of the products are "natively multimodal," which means you can prompt them with text, images, audio, and video.

SEE ALSO: Google I/O 2024: 'AI Agents' are AI personal assistants that can return your shoes

"Today, we're introducing Gemini 1.5 Flash," Demis Hassabis, CEO of Google DeepMind who introduced Flash, said. "Flash is a lighter-weight model compared to Pro. It's designed to be fast and cost-efficient to serve at scale while still featuring multi-model reasoning capabilities and breakthrough long context."

In a blog post, Hassabis said Gemini 1.5 Flash "excels at summarization, chat applications, image and video captioning, data extraction from long documents and tables, and more."

You can give both Gemini 1.5 Pro and Gemini 1.5 Flash today globally in Google AI Studio and Vertex AI. Developers can sign up to try two million tokens by heading to ai.google.dev/gemini-api. Gemini 1.5 Pro is $7 for one million tokens, but for prompts up to 128,000, it's just $3.50 for one million tokens. Gemini 1.5 Flash will start at $0.35 for one million tokens up to 128,000.

"We're so excited to see what all of you will create with it," Hassabis said.

Google I/O: Project Astra can tell where you live just by looking out the window

Tue, 05/14/2024 - 13:55

Google has a new AI agent that can tell you things about what's around you. A lot of things.

Called "Project Astra," it's a Gemini-based multimodal AI tool that lets you point your phone's camera at real-life stuff and get a spoken description of what you're looking at.

In a demo, shown during Google's I/O conference Tuesday, the tool was pointed at a loudspeaker, correctly identifying a part of it as a tweeter. Far more impressively, the phone's camera was then turned onto a snippet of code on a computer display, with Astra yielding a fairly detailed overview of what the code's doing.

Finally, the person testing Project Astra turned their phone towards the window and asked "What neighborhood do you think I'm in?" After a few seconds, Gemini replied: "This appears to be the King's Cross area of London," along with a few details about the neighborhood. Finally, the tool was asked to find a misplaced pair of glasses, and it complied, saying exactly where the glasses were left.

In perhaps the most interesting part of the video, we see that those glasses are actually some kind of smart glasses, which can again be used to prompt Gemini about what the wearer sees - in this case giving a suggestion on a diagram drawn on a whiteboard.

SEE ALSO: Google I/O 2024: 'AI Agents' are AI personal assistants that can return your shoes

According to Google's DeepMind CEO Demis Hassabis, something like Astra could be available both on a person's phone or glasses. The company did not, however, share a launch date, though Hassabis said that some of these capabilities are coming to Google products "later this year."

Featured Video For You Here’s everything that was announced at Google I/O.

Google I/O 2024: 'AI Agents' are AI personal assistants that can return your shoes

Tue, 05/14/2024 - 13:50

Google will be sharing lots of products ready to utilize at today's big Google I/O event.

However, one announcement from CEO Sundar Pichai was more of an idea that's in the works: AI Agents. According to Pichai, AI Agents are still in "early days," but their description shows what Google envisions what AI can do for users. 

What are 'AI Agents'?

Pichai described AI agents as "intelligent systems that show reasoning, planning, and memory" and can "think multiple steps ahead" to complete more compex tasks for users.

Shopping returns is a specific example used at Google I/O to give a real-world use case for AI agents. Pichai explained a scenario where a user wants to return a pair of shoes they purchased. AI agents will be able to search the user's email inbox for the receipt, locate the order number from the email, fill out the return form on the store's website, and schedule a pickup for the item to be returned.

Google showed off how AI agents could be of help when returning products to the store. Credit: Google I/O

Another scenario provided involves AI agents searching up local shops and services, like dog walkers and dry cleaners, for a user that just moved to a new city, so that the user has all of these locations and contacts at their disposal. A key feature mentioned here was that Gemini and Chrome would work together to complete these takes, showing how AI agents would be able to work across various software and platforms. 

It's an interesting concept, but also weird in some aspects. We'll keep our eyes peeled for AI agents in the near-to-maybe-not-so-near future.

Featured Video For You Here’s everything that was announced at Google I/O.

Ask Photos is Google's new AI feature for Google Photos

Tue, 05/14/2024 - 13:31

At Google's annual I/O conference, the tech giant announced a more powerful AI-based search experience within Google Photos.

People upload 6 billion photos and videos to the platform every day, and now they'll be able to search for those elements more efficiently using not just keywords but phrases.

SEE ALSO: Google I/O 2024: AI Overviews are rolling out to users this week

CEO Sundar Pichai gave one example: "Say you're at a parking station ready to pay," he told the audience, "but you can't recall your license plate number." Now you can ask Google Photos something like, "What's my license plate number again?"

It's unclear how Gemini will know to bring up your license plate number instead of all photos with a license plate in them, but Pichai says Gemini does.

Another example was reminiscing on family memories. Ask Google Photos, "When did Lucia learn to swim?" for example, says Pichai, and "Gemini goes beyond a simple search, recognizing different contexts from doing laps in the pool to snorkeling in the ocean to the text and dates on our swimming certificates." Then it packages those photos and videos together for you.

SEE ALSO: OpenAI isn't just competing with Google Search. It's coming for Google Assistant, Alexa & Siri, too.

Google I/O 2024: AI Overviews in Search are rolling out to users this week

Tue, 05/14/2024 - 13:25

It's been teased and tested for over a year, but it's finally here: AI Overviews in Search.

SEE ALSO: Google I/O 2024: What to expect after OpenAI's GPT-4o reveal

The first major announcement of Google's I/O 2024 event, the tech giant started off with the reveal that its Search Generative Experience (SGE) labs feature will be rolling out to US users within the week. This new feature allows Google to provide AI-generated overviews or summaries directly in the search results, potentially offering more comprehensive answers to complex queries.

The aim is to improve the user's experience by leveraging AI to aggregate and synthesize information from various web sources, delivering it in an easily digestible format. In May 2023, Google introduced its AI-enhanced search feature, which it characterized as an "AI-powered snapshot of key information to consider, with links to dig deeper."

However, the results were inconsistent when Mashable conducted tests on this feature on Google Labs. Reporter Cecily Mauran noted that this tool cluttered Google's results pages, overshadowed already useful features, and mentioned that unexpectedly encountering an AI-generated result could be somewhat disconcerting.

That doesn't mean that Google hasn't improved on SGE since then. The keynote has only started so you can watch live here.

Featured Video For You Here’s everything that was announced at Google I/O.

Marc Rebillet shows off 'Music FX DJ' ahead of Google I/O — and now, you can make sick beats, too

Tue, 05/14/2024 - 13:05

The new DJ mode for Google's Music FX got the best presentation possible ahead of its Google I/O 2024 event. Featuring acclaimed DJ Marc Rebillet, the new addition to the tech giant musical generative AI lets anyone become a middle-aged DJ in real-time.

SEE ALSO: Google I/O 2024 keynote: How to watch live

The DJ mode was first introduced in March, just months after Google released Music FX in December of last year. DJ Mode allows users to generate music through text prompts, focusing on instrumental tracks without the ability to include vocals or specific artist references. This mode is particularly innovative because it starts with user-generated descriptions or "chips," which can then be manipulated to create live beats and music loops. Users can adjust the music dynamically by changing the prompts, which are reflected in the live mix within seconds.

DJ Marc Rebillet at Google I/O Credit: Mashable / Google

Despite its capabilities, Google emphasizes that MusicFX is experimental and could be subject to changes or discontinuation. Therefore, it's recommended for casual exploration rather than serious or commercial applications. Like its counterpart, ImageFX, MusicFX uses a similar method of converting text prompts into adaptable options for creative output, though DJ Mode focuses specifically on music.

You can watch the Google I/O 2024 event here.

Pages