The company, however, did not make an official comment on changing Bard’s name to Gemini…reports Asian Lite News
Google is planning some big changes to its artificial intelligence (AI) models, including reportedly changing the name of Bard to Gemini.
Android app developer Dylan Roussel has apparently leaked a company change-log that says “Bard is now Gemini”, the new model set to compete with OpenAI’s GPT-4.
“Bard is now Gemini. Gemini is the best way to get direct access to Google AI. All the collaborative capabilities you know and love are still here, and will keep getting better in the Gemini era,” reads the change-log.
The company, however, did not make an official comment on changing Bard’s name to Gemini.
“We’ve also evolved the UI to reduce visual distractions, improve legibility, and simplify the navigation,” according to the document.
The log said that Google will debut voice chat with Gemini, as well as a new “Ultra 1.0” model with “Gemini Advanced,” a paid plan that offers ChatGPT Plus-like file uploading features.
“Gemini Advanced gives you access to our most capable Al model, Ultra 1.0. With our Ultra 1.0 model, Gemini Advanced is far more capable at highly complex tasks like coding, logical reasoning, following nuanced instructions, and creative collaboration,” read the Google document.
Gemini Advanced will continue to expand with new and exclusive features in the coming months, including expanded multi-modal capabilities, better coding features, and the ability to upload and more deeply analyse files, documents, data, and more.
Gemini Advanced is a paid plan available in over 150 countries and territories. The Gemini app is coming soon, starting with English.
Google, which brought Gemini Pro into its AI chatbot Bard in English last December, has now made it available in more than 230 countries and territories in over 40 languages, including nine Indian languages.
The nine Indian languages are Hindi, Tamil, Telugu, Bengali, Kannada, Malayalam, Marathi, Gujarati, and Urdu.
In the past, this feature allowed you to view a webpage as Google sees it, which can be useful beyond just checking if a page is loading slowly…reports Asian Lite News
Google has officially retired its ‘cached’ web page feature, with the company saying it was ‘no longer required’.
Google’s search liaison confirmed the development, saying, “It was meant for helping people access pages when way back, you often couldn’t depend on a page loading. These days, things have greatly improved. So, it was decided to retire it.”
In the past, this feature allowed you to view a webpage as Google sees it, which can be useful beyond just checking if a page is loading slowly.
Many users used this tool to verify the legitimacy of a website, and SEO managers used it to examine their sites for problems. A lot of users, especially those in the news industry, checked websites’ caches to determine if any content had been added or removed recently.
Earlier, clicking the three-dot menu next to a result would open an “About this result” dialogue with a Cached button at the bottom right.
Last month, Google removed some underutilised features in Google Assistant “to focus on delivering the best possible user experience”.
As per the list shared by the company, Google removed 17 features.
The removed features include the ability to use your voice to send an email, video, or audio message. Users will also not be able to use their voice to perform tasks such as making a reservation, sending a payment, or posting on social media.
The design facilitates a wide range of content creation tasks and video editing applications, including image-to-video, video inpainting, and stylised generation…reports Asian Lite News
Google has introduced a new video generation AI model called Lumiere that uses a new diffusion architecture called Space-Time U-Net, or STUNet. Lumiere creates five-second videos in one process instead of stitching smaller still frames together.
This technology figures out where things are in a video (space) and how they simultaneously move and change (time).
“We introduce Lumiere — a text-to-video diffusion model designed for synthesising videos that portray realistic, diverse and coherent motion — a pivotal challenge in video synthesis,” said Google researchers in a paper.
“We introduce a Space-Time U-Net architecture that generates the entire temporal duration of the video at once, through a single pass in the model,” they wrote.
The design facilitates a wide range of content creation tasks and video editing applications, including image-to-video, video inpainting, and stylised generation.
Lumiere can perform text-to-video generation, convert still images into videos, generate videos in specific styles using a reference image, apply consistent video editing using text-based prompts and create cinemagraphs by animating specific regions of an image.
The Google researchers said that the AI model outputs five-second-long 1024×1024 pixel videos, which they describe as “low-resolution.”
Lumiere also generates 80 frames compared to 25 frames from Stable Video Diffusion.
“There is a risk of misuse for creating fake or harmful content with our technology, and we believe that it is crucial to develop and apply tools for detecting biases and malicious use cases to ensure a safe and fair use,” said the paper authors.
Pichai said that the latest “role eliminations are not at the scale of last year’s reductions, and will not touch every team”…reports Asian Lite News
Google CEO Sundar Pichai has reportedly warned employees to brace themselves for more job cuts this year.
Google, which has let go of over a thousand employees across various departments in the past week or so, is likely to make more job cuts, reports The Verge, citing an internal memo.
“We have ambitious goals and will be investing in our big priorities this year,” Pichai told employees in the memo.
“The reality is that to create the capacity for this investment, we have to make tough choices,” he added.
In the memo, Pichai said that the latest “role eliminations are not at the scale of last year’s reductions, and will not touch every team”.
“But I know it’s very difficult to see colleagues and teams impacted,” the Google CEO added.
The layoffs this year are about “removing layers to simplify execution and drive velocity in some areas”.
“Many of these changes are already announced, though to be upfront, some teams will continue to make specific resource allocation decisions throughout the year where needed, and some roles may be impacted,” Pichai further wrote.
After laying off nearly 1,000 employees last week, Google is also reportedly slashing “a few hundred” more jobs in its advertising sales team as part of an ongoing restructuring exercise.
Philipp Schindler, Google’s chief business officer, told staff in a memo that the fresh job cuts “were the result of changes to how Google’s sales team operated”, Business Insider reported.
A Google spokesperson also confirmed that “a few hundred roles globally are being eliminated” as part of the restructuring.
In January last year, Google cut its workforce by 12,000 people, or around 6 per cent of its full-time employees.
The layoffs will primarily affect Google’s Large Customer Sales (LCS) unit, a team that sells ads to large businesses.
The Google Customer Solutions team (GCS), which sells ads to smaller clients, will now become the “core” ad sales team.
Google laid off some employees on its LCS team in October last year.
“Every year we go through a rigorous process to structure our team to provide the best service to our Ads customers,” a company spokesperson was quoted as saying.
“We map customers to the right specialist teams and sales channels to meet their service needs. As part of this, a few hundred roles globally are being eliminated and impacted employees will be able to apply for open roles or elsewhere at Google,” the spokesperson added.
Google had recently laid off workers in several departments, including hardware, central engineering teams, and Google Assistant.
The tech giant also made other job cuts to its recruiting and news divisions later in the year.
YouTube Trims Teams
Google-owned YouTube is reportedly laying off at least 100 employees from its creator management and operations teams.
YouTube Chief Business Officer Mary Ellen Coe announced the layoffs internally, reports Tubefilter. “YouTube will bring its content creator management teams together under dedicated central leadership in each individual country,” the report noted.
YouTube’s music and support teams are also reportedly being reorganised. In an internal staff memo, Coe said that these changes are intended to streamline YouTube’s business.
She, however, did not divulge how many employees are being affected. “As we have seen the past few years, our creator base is broadening and diversifying, from our most experienced creators to a new generation of casual creators posting on YouTube for the first time,” Coe wrote.
“Gen AI tools will further fuel new forms of creativity and bring even more creators to the platform,” she added. At the same time, “our subscription businesses have momentum, powered by partnerships with music, sports and media companies”.
“As the business evolves, we have an even greater need to ensure we’re running the business effectively and meeting the needs of all of our users,” Coe told the employees. Those being laid off will have a chance to apply for other roles at YouTube. However, “it was not clear if they are guaranteed new positions within the company”.
“Each one of you has been a valued and meaningful part of our team, and we’ll be here to support you as you consider next steps,” said Coe.
Qualcomm announced that its premium Snapdragon 8 Gen 3 Mobile Platform for Galaxy is powering the latest flagship Galaxy S24 Ultra globally, and the Galaxy S24 Plus and S24 in select regions...reports Asian Lite News
Samsung and Google Cloud have announced a new, multi-year partnership to bring Google Cloud’s generative artificial intelligence (AI) technology to Samsung smartphones, starting with the new Galaxy S24 series.
Samsung will be the first Google Cloud partner to deploy Gemini Pro and Imagen 2 on Vertex AI via the cloud to its smartphone devices.

“We’re thrilled that the Galaxy S24 series is the first smartphone equipped with Gemini Pro and Imagen 2 on Vertex AI,” said Janghyun Yoon, Corporate EVP and Head of Software Office of Mobile eXperience Business at Samsung.

Gemini can generalise and seamlessly understand, operate across, and combine different types of information, including text, code, images, and video.

Starting with Samsung-native applications, users can take advantage of the summarisation feature across Notes, Voice Recorder, and Keyboard. Gemini Pro on Vertex AI provides Samsung with critical Google Cloud features, including security, safety, privacy, and data compliance.
Galaxy S24 series users can also immediately benefit from Imagen 2, Google’s most advanced text-to-image diffusion technology from Google DeepMind to date.

With Imagen 2 on Vertex AI, Samsung can bring safe and intuitive photo-editing capabilities into users’ hands. These features can be found in Generative Edit in the S24’s Gallery application.

“With Gemini, Samsung’s developers can leverage Google Cloud’s world-class infrastructure, cutting-edge performance, and flexibility to deliver safe, reliable, and engaging generative AI-powered applications on Samsung smartphone devices,” said Thomas Kurian, CEO, Google Cloud.

Meanwhile, chip giant Qualcomm announced that its premium Snapdragon 8 Gen 3 Mobile Platform for Galaxy is powering the latest flagship Galaxy S24 Ultra globally, and the Galaxy S24 Plus and S24 in select regions.
“Snapdragon 8 Gen 3 for Galaxy instills its advanced AI capabilities in the Galaxy S24 series, to enable new experiences with AI features to empower users’ everyday life,” said Chris Patrick, Senior Vice President and General Manager of Mobile Handsets, Qualcomm.

It also fuels an advanced professional-quality camera, gaming experiences and ultra-fast connectivity including Wi-Fi 7, he added.
This tool should be powered by Google’s Imagen family of models, the report mentioned…reports Asian Lite News
Google is reportedly planning to add its own image generator directly to its AI chatbot Bard. As shared by developer Dylan Roussel on X, an unpublished Google Bard changelog dated January 18 showed how you can “Create images with Bard”, reports 9to5Google.
“Here’s what’s coming next in Bard… tomorrow. Image generation with Bard will use Imagen, Google’s text-to-image diffusion technology,” Roussel wrote.
However, the developer also noted that the “content of this changelog may still be changed until officially released”.
Similar to other tools, it will allow users to create images by simply describing their imagination in words to the chatbot.
This tool should be powered by Google’s Imagen family of models, the report mentioned.
Meanwhile, Google has added a new feature in Maps that will let users navigate in tunnels and other satellite dead zones. The company has added support for ‘Bluetooth beacons’, which has rolled out widely on Google Maps for Android; however, the feature is still missing from the iOS version of the app.
Bluetooth beacons are not new, as Google-owned Waze has long supported the technology in tunnels globally, including in major cities like New York City, Chicago, Paris, Brussels, and many more.
Those beacons, though, have only ever functioned within the Waze app.
In line with these goals, there is a need to establish new institutional frameworks and public-private partnerships along with implementing multilateral controls to aid and enhance these efforts…reports Asian Lite News
The AI Governance Alliance (AIGA) released on Thursday a series of three new reports on advanced artificial intelligence (AI). The papers focus on generative AI governance, unlocking its value and a framework for responsible AI development and deployment.
The alliance brings together governments, businesses and experts to shape responsible AI development, applications and governance, and to ensure equitable distribution of and enhanced access to this path-breaking technology worldwide.
“The AI Governance Alliance is uniquely positioned to play a crucial role in furthering greater access to AI-related resources, thereby contributing to a more equitable and responsible AI ecosystem globally,” said Cathy Li, Head, AI, Data and Metaverse, World Economic Forum.
“We must collaborate among governments, the private sector and local communities to ensure the future of AI benefits all.”
The AIGA is calling upon experts from various sectors to address several key areas. This includes improving data quality and availability across nations, boosting access to computational resources, and adapting foundation models to suit local needs and challenges. There is also a strong emphasis on education and the development of local expertise to create and navigate local AI ecosystems effectively.
In line with these goals, there is a need to establish new institutional frameworks and public-private partnerships along with implementing multilateral controls to aid and enhance these efforts.
While AI holds the potential to address global challenges, it also poses risks of widening existing digital divides or creating new ones. These and other topics are explored in a new briefing paper series, released on Thursday and crafted by AIGA’s three core workstreams, in collaboration with IBM Consulting and Accenture.
As AI technology evolves at a rapid pace and developed nations race to capitalize on AI innovation, the urgency to address the digital divide is critical to ensure that billions of people in developing countries are not left behind.
On international cooperation and inclusive access in AI development and deployment, Generative AI Governance: Shaping Our Collective Global Future — from the Resilient Governance and Regulation track — evaluates national approaches, addresses key debates on generative AI, and advocates for international coordination and standards to prevent fragmentation.
The AIGA also seeks to mobilize resources for exploring AI benefits in key sectors, including healthcare and education.
“As we witness the rapid evolution of artificial intelligence globally, the UAE stands committed to fostering an inclusive AI environment, both within our nation and throughout the world,” Omar Sultan Al Olama, Minister of State for Artificial Intelligence, Digital Economy and Remote Work Applications of the United Arab Emirates, said.
“Our collaboration with the World Economic Forum’s AI Governance Alliance is instrumental in making AI benefits universally accessible, ensuring no community is left behind. We are dedicated to developing a comprehensive and forward-thinking AI and digital economy roadmap, not just for the UAE but for the global good.”
Objective structured clinical examination (OSCE) is a practical assessment commonly used in the real world to examine clinicians’ skills and competencies in a standardised and objective way…reports Asian Lite News
Tech giant Google has developed a novel chatbot that can converse with patients and perform diagnostic reasoning on par with human doctors.
The Articulate Medical Intelligence Explorer (AMIE), a conversational diagnostic research AI system, is based on a large language model (LLM) developed by Google, and can deliver results across a multitude of disease conditions, specialties and scenarios.
“We trained and evaluated AMIE along many dimensions that reflect quality in real-world clinical consultations from the perspective of both clinicians and patients,” Alan Karthikesalingam and Vivek Natarajan, Research Leads, Google Research, wrote in a blog post.
“The physician-patient conversation is a cornerstone of medicine, in which skilled and intentional communication drives diagnosis, management, empathy and trust. AI systems capable of such diagnostic dialogues could increase availability, accessibility, quality and consistency of care by being useful conversational partners to clinicians and patients alike. But approximating clinicians’ considerable expertise is a significant challenge,” they added.
AMIE has been trained on real-world datasets comprising medical reasoning, medical summarisation, and real-world clinical conversations.
To train the chatbot, the team developed a novel self-play based simulated diagnostic dialogue environment with automated feedback mechanisms in a virtual care setting. They also employed an inference time chain-of-reasoning strategy to improve AMIE’s diagnostic accuracy and conversation quality.
AMIE’s performance was tested in consultations with simulated patients (played by trained actors), compared to those performed by 20 real board-certified primary care physicians (PCPs).
“AMIE and PCPs were assessed from the perspectives of both specialist attending physicians and our simulated patients in a randomised, blinded crossover study that included 149 case scenarios from OSCE providers in Canada, the UK, and India in a diverse range of specialties and diseases,” the researchers said.
Objective structured clinical examination (OSCE) is a practical assessment commonly used in the real world to examine clinicians’ skills and competencies in a standardised and objective way.
AMIE performed simulated diagnostic conversations at least as well as PCPs when both were evaluated along multiple clinically meaningful measures of consultation quality, including history-taking, diagnostic accuracy, clinical management, clinical communication skills, relationship fostering and empathy.
AMIE showed greater diagnostic accuracy and superior performance on 28 of 32 measures from the perspective of specialist physicians, and on 24 of 26 from the perspective of patient actors.
“Our research has several limitations and should be interpreted with appropriate caution. Clinicians were limited to unfamiliar synchronous text-chat which permits large-scale LLM-patient interactions but is not representative of usual clinical practice. While further research is required before AMIE could be translated to real-world settings, the results represent a milestone towards conversational diagnostic AI,” the researchers wrote in the paper, published on the arXiv preprint server.
Amazon-owned live game streaming platform Twitch is reportedly laying off 35 per cent of its workforce, or about 500 employees, this week…reports Asian Lite News
Amazon is reportedly laying off several hundred employees in its Prime Video and MGM Studios divisions.
Mike Hopkins, Senior Vice President of the division, announced the cuts in an email on Wednesday, saying that the reason for the reduction is to “reduce or discontinue investments in certain areas while increasing our investment and focus on content and product initiatives that deliver the most impact”, reports TechCrunch.
The company has also started to notify the affected workers in the US and will inform most other regions by the end of this week.
Affected employees are provided with packages that include separation payments, transitional benefits, and external career transition support, the report mentioned.
“Our prioritisation of initiatives that we know will move the needle, along with our continued investments in programming, marketing and product, positions our business for an even stronger future,” Hopkins said.
Meanwhile, Amazon-owned live game streaming platform Twitch is reportedly laying off 35 per cent of its workforce, or about 500 employees, this week. Twitch laid off dozens of employees last year, and has shut down its service in South Korea due to “prohibitively expensive” costs.
According to a Bloomberg report, the fresh job cuts, “which could be announced as soon as Wednesday”, come amid concerns over losses at Twitch.
Meta Joins Layoff Club
Meta has started the new year by laying off some technical programme managers (TPMs) at Instagram, and reports say that at least 60 such jobs are either being consolidated or eliminated.
According to a post on Blind, an anonymous forum and community for verified tech employees, the company has given these employees time until the end of March to re-interview for product management roles or other jobs.
A verified Meta professional noted in the thread that job cuts “will soon (be) expanded to other orgs for TPMs”.
It means other technical programme managers at Meta may also find their roles consolidated or reorganised away.
“Meta layoffs: all TPMs in Instagram laid off today. Confirmed by my spouse who works there. She is not in the Instagram org and not affected. Product managers are not affected,” read another Blind post.
According to Business Insider, at least 60 such employees have lost their jobs.
TPMs are positioned somewhere between technical workers like engineers and product managers (PMs).
A former Instagram employee posted to LinkedIn about “expected changes to TPM roles,” saying that people are expected to “re-interview for PM roles” or product manager roles.
Meta did not immediately comment on these layoffs.
After last year’s mass layoffs, Meta founder and CEO Mark Zuckerberg has not denied “that more jobs would be eliminated in the future”.
According to the report, he was still aiming to reduce the company’s overall headcount to its 2020 level, before the company went on a mass hiring spree.
In March last year, Zuckerberg announced the company would cut 10,000 jobs in the coming months, along with newly reorganised teams and management hierarchies.
The fresh cuts came just four months after Meta laid off 11,000 employees, or 13 per cent of the company’s workforce, in November 2022.
Google Axes AR Staff
Google is laying off hundreds of hardware employees, especially in the augmented reality (AR) division, while Fitbit co-founders James Park and Eric Friedman and other Fitbit leaders are reportedly leaving the company.
Google had acquired wearable company Fitbit for $2.1 billion in 2019.
“A few hundred roles are being eliminated in DSPA (Devices and Services) with the majority of impacts on the 1P AR Hardware team,” a Google spokesperson said in a statement.
The Devices & Services teams are responsible for Pixel, Nest, and Fitbit devices. “While we are making changes to our 1P AR hardware team, Google continues to be deeply committed to other AR initiatives, such as AR experiences in our products, and product partnerships,” the spokesperson told 9to5Google.
The company said that it remains committed to “serving our Fitbit users well, innovating in the health space with personal AI, and building on the momentum with Pixel Watch, the redesigned Fitbit app, Fitbit Premium service, and the Fitbit tracker line”.
“This work will continue to be a key part of our new org model,” said the tech giant.
Google is switching to a functional organisation model where there will be one team responsible for hardware engineering across Pixel, Nest, and Fitbit hardware. There will be a single leader for such products across all Google hardware, according to reports.
Google has shifted its work on AR to the Android and hardware teams.
The Google VP also discussed the investment plans of the company and told the Chief Minister that artificial intelligence was set to bring major transformation in various sectors…reports Asian Lite News
Google Vice President Chandrasekhar Thota called on Telangana Chief Minister Revanth Reddy on Thursday and said that the technology major has evinced interest in working with the State Government.
Thota paid a courtesy call on the Chief Minister at his residence.
Thota further said that Google was excited to partner with the State in developing a digitisation agenda for Telangana in farming, education and health.
The IT major, he said, has deep technology expertise to deliver quality services to the people of the State.
The Google VP also discussed the investment plans of the company and told the Chief Minister that artificial intelligence was set to bring major transformation in various sectors.
As per the official statement, the Telangana CM discussed road safety improvements using Google Maps and Google Earth platforms.
IT and Industries Minister D Sridhar Babu and Roads and Buildings Minister Komatireddy Venkat Reddy were also present at the meeting.
Earlier, on January 10, Telangana CM Revanth Reddy hosted representatives of 13 countries for dinner in Hyderabad. The dinner was attended by representatives of the United States of America, Iran, Turkey, the UAE, the UK, Japan, Thailand, Germany, Sri Lanka, Bangladesh, Australia, France and Finland.
The Telangana Chief Minister appealed to the respective countries to explore investment opportunities in the State. He assured them that the government will collaborate and maintain a cordial relationship with everyone.
Chief Minister Revanth Reddy also met representatives of Amazon, who briefed him about the company’s investments in Telangana. Deputy Chief Minister Mallu Bhatti and other ministers, as well as Industries Department Chief Secretary Jayesh Ranjan and other officials, participated in the meeting.
On January 7, the Revanth Reddy government completed one month in power. Speaking on the occasion, the Chief Minister said, “The month-long journey that took place while maintaining the word that the servants are not the rulers… bringing the governance closer to the people… and assuring that I am there gave me a new experience. Listening to the voices of the poor… paving the way for the future of the youth… seeing the happiness on the faces of our girls… reassuring the farmers… the month-long walk is taking steps towards a bright future. This month-long administration has been responsible for the commitment to investments… laying a big emphasis on industrial growth… carving engravings for the development of cities… I will continue to fulfill my responsibility.” (ANI)