As part of this collaboration, selected Wipro employees will have full access to IISc faculty members, online lectures, libraries and alumni networks…reports Asian Lite news
Amid the government’s call to bridge the skill gap in emerging technologies, IT major Wipro on Thursday announced a collaboration with the Indian Institute of Science (IISc) to offer eligible employees a higher education programme in artificial intelligence (AI).
The online Master's in Technology (MTech) course will emphasise key areas such as AI, foundations of ML/AI, data science and business analytics, addressing the growing demand for skilled professionals in these domains, the company said in a statement.
“GenAI is evolving at a rapid pace, and we are confident that selected employees will gain immensely from the knowledge at IISc and develop capabilities for the opportunities ahead delivering strong business outcomes,” said Sanjeev Jain, SVP and Global Head, Business Operations, Wipro.
As part of this collaboration, selected Wipro employees will have full access to IISc faculty members, online lectures, libraries and alumni networks.
They will also benefit from mentorship by seasoned professionals from the data, analytics and AI practice at Wipro.
Admission to the programme will be subject to rigorous entrance tests and evaluations designed by IISc, the company said.
“The programme curriculum for working professionals has been designed with the same high standards as our full-time programmes, with our faculty members delivering content online to train students on foundational concepts and real-world applications,” said Professor Rajesh Sundaresan, Dean, Division of EECS, IISc.
Considering that more than 60 countries, including India, are entering election mode this year, it is vital to remain vigilant about recent trends in the dynamic digital landscape, especially deepfakes, says Ivana Bartoletti, Global Chief Privacy and AI Governance Officer at Wipro.
With the widespread use of generative AI, we face a new and concerning threat: deepfakes.
“Deepfakes have become accessible to everyone, posing a significant risk as these manipulations allow the creation and dissemination of realistic audio and video content featuring individuals saying and doing things they never actually said or did,” emphasised Bartoletti, also the founder of the ‘Women Leading in AI Network’.
The consequences extend beyond the digital realm, as online disinformation and coordination can spill over into real-world violence.
In India, the government has issued an update to its AI advisory, saying that big digital companies no longer need the government's permission before launching an AI model in the country.
However, big tech companies are advised to label “under-tested and unreliable AI models to inform users of their potential fallibility or unreliability.”
"Under-tested/unreliable Artificial Intelligence foundational model(s)/LLM/Generative AI, software(s) or algorithm(s) or further development on such models should be made available to users in India only after appropriately labelling the possible inherent fallibility or unreliability of the output generated," according to the new MeitY advisory.
All intermediaries or platforms must ensure that the use of AI model(s)/LLM/Generative AI, software or algorithms "does not permit its users to host, display, upload, modify, publish, transmit, store, update or share any unlawful content as outlined in Rule 3(1)(b) of the IT Rules or violate any other provision of the IT Act."
Digital platforms have been asked to comply with the new AI guidelines with immediate effect.
To ensure public safety, Bartoletti says, companies must take responsibility and implement measures to combat deepfakes and disinformation.
“This includes investing in advanced detection technologies to identify and flag deepfake content, as well as collaborating with experts to develop effective debunking methods,” she noted.
Additionally, promoting media literacy and critical thinking among the public is crucial.
“By taking proactive steps to address the risks of deepfakes, we can protect the integrity of elections and uphold the democratic process,” said Bartoletti.