
ChatGPT is politically biased, finds study

These multiple responses were then put through a 1000-repetition ‘bootstrap’ (a method of re-sampling the original data) to further increase the reliability of the inferences drawn from the generated text…reports Asian Lite News

OpenAI’s artificial intelligence chatbot ChatGPT has a significant and systemic Left-wing bias, according to a new study.

Published in the journal ‘Public Choice’, the findings show that ChatGPT’s responses favour the Democrats in the US, the Labour Party in the UK, and President Lula da Silva of the Workers’ Party in Brazil.

Concerns about an inbuilt political bias in ChatGPT have been raised previously, but this is the first large-scale study to use a consistent, evidence-based analysis. “With the growing use by the public of AI-powered systems to find out facts and create new content, it is important that the output of popular platforms such as ChatGPT is as impartial as possible,” said lead author Fabio Motoki of Norwich Business School at the University of East Anglia in the UK.

“The presence of political bias can influence user views and has potential implications for political and electoral processes. Our findings reinforce concerns that AI systems could replicate, or even amplify, the existing challenges posed by the Internet and social media,” Motoki said. The researchers developed an innovative new method to test ChatGPT’s political neutrality.

The platform was asked to impersonate individuals from across the political spectrum while answering a series of more than 60 ideological questions.

The responses were then compared to the platform’s default answers to the same set of questions — allowing the researchers to measure the degree to which ChatGPT’s responses were associated with a particular political stance. To overcome difficulties caused by the inherent randomness of ‘large language models’ that power AI platforms such as ChatGPT, each question was asked 100 times and the different responses were collected.

These multiple responses were then put through a 1000-repetition ‘bootstrap’ (a method of re-sampling the original data) to further increase the reliability of the inferences drawn from the generated text.
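The bootstrap procedure the researchers describe can be illustrated with a short sketch. This is not the study's actual code; the data, scoring scale and variable names below are purely hypothetical, assuming each repeated answer has been scored on a simple left-to-right scale.

```python
import random

def bootstrap_mean(scores, n_reps=1000, seed=42):
    """Resample `scores` with replacement n_reps times and return the
    observed mean plus a 95% percentile confidence interval."""
    rng = random.Random(seed)
    n = len(scores)
    means = []
    for _ in range(n_reps):
        # Draw a resample of the same size, with replacement
        resample = [rng.choice(scores) for _ in range(n)]
        means.append(sum(resample) / n)
    means.sort()
    # 2.5th and 97.5th percentiles of the bootstrap distribution
    return sum(scores) / n, (means[int(0.025 * n_reps)], means[int(0.975 * n_reps)])

# Hypothetical data: 100 repeated answers to one question, each scored
# on a left (-1) to right (+1) scale.
rng = random.Random(0)
scores = [rng.choice([-1, 0, 1]) for _ in range(100)]
mean, (low, high) = bootstrap_mean(scores)
print(f"mean lean {mean:+.2f}, 95% CI [{low:+.2f}, {high:+.2f}]")
```

Resampling the 100 collected answers 1,000 times in this way gives a confidence interval around the average response, which is how a bootstrap makes inferences from noisy model output more reliable.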

“Due to the model’s randomness, even when impersonating a Democrat, sometimes ChatGPT answers would lean towards the right of the political spectrum,” said co-author Victor Rodrigues.  A number of further tests were undertaken to ensure the method was as rigorous as possible. In a ‘dose-response test’ ChatGPT was asked to impersonate radical political positions.

In a ‘placebo test’, it was asked politically-neutral questions. And in a ‘profession-politics alignment test’, it was asked to impersonate different types of professionals.

In addition to political bias, the tool can be used to measure other types of biases in ChatGPT’s responses. While the research project did not set out to determine the reasons for the political bias, the findings did point towards two potential sources.

The first was the training dataset, which may contain biases of its own, or biases added by the human developers, that the developers’ ‘cleaning’ procedure failed to remove. The second potential source was the algorithm itself, which may be amplifying existing biases in the training data.



ChatGPT maker OpenAI likely to go bankrupt by 2024

A recent report by Investopedia claimed that it is too early for any AI leading company, like OpenAI, Anthropic, or Inflection, to head into the initial public offering (IPO) market…reports Asian Lite News

ChatGPT maker OpenAI is likely to go bankrupt by the end of 2024 if it doesn’t get more funding soon, according to media reports.

Analytics India Magazine reported that the ChatGPT website saw a continuous decline in users in the first six months of the year.

User numbers declined to 1.5 billion in July, from 1.7 billion in June and 1.9 billion in May, according to data from analytics company SimilarWeb. These figures do not include API usage or the ChatGPT mobile app.

One theory holds that the decline began because students were out of school from May; another says that people started building their own bots instead of using the original offering.

“I am no longer allowed to use ChatGPT at work, but we have developed our own internal model based on ChatGPT,” a user said in a tweet.

Another issue is that OpenAI’s losses doubled to around $540 million last year after it developed ChatGPT, which has created a ruckus in the job market over fears that it may replace human creativity, according to a May report by The Information.

This comes even as ChatGPT reportedly costs a whopping $700,000 (Rs 5.80 crore) per day to operate. Even OpenAI CEO Sam Altman had admitted in a tweet that “compute costs are eye-watering”.

A recent report by Investopedia claimed that it is too early for any AI leading company, like OpenAI, Anthropic, or Inflection, to head into the initial public offering (IPO) market.

“It is because it takes at least 10 years of operation and $100 million in revenue for an IPO to be successful,” the report said. In addition, billionaire Elon Musk is increasing the pressure with claims of building a rival chatbot.

While Microsoft-backed OpenAI has projected annual revenue of $200 million in 2023 and aims to reach $1 billion in 2024, its losses are mounting. It is largely surviving on Microsoft’s $10 billion investment.



OpenAI to roll out ‘huge set’ of ChatGPT updates   

Reacting to his post, a number of users praised the new ChatGPT updates. A user wrote, “Great updates. Would love the ability to search through the history”…reports Asian Lite News

Microsoft-backed OpenAI’s first developer advocate and developer relations expert, Logan Kilpatrick, has posted on X (formerly Twitter) that a “huge set of ChatGPT updates are rolling out over the next week”.

Among the new features Kilpatrick highlighted are example prompts, suggested replies and follow-up questions, and a default GPT-4 setting, so that paying ChatGPT Plus subscribers no longer have to select the latest and most advanced publicly available OpenAI large language model (LLM) every time they start a new chat. Plus users will also get support for multiple file uploads when using the OpenAI Code Interpreter plugin, among other additions.

Reacting to his post, a number of users praised the new ChatGPT updates. A user wrote, “Great updates. Would love the ability to search through the history”.

“Everyone wants this and I hope it lands eventually! Search is live in iOS btw,” Kilpatrick replied. “These are great changes, congrats to the team! If possible, please consider making the pages translated and localised. Most people know that we can interact with it in many languages, but the landing pages and interaction pages are a major hurdle to people that don’t speak English,” another user said.

One more user commented, “Very solid updates. Love the default/suggested prompts! Text-based interfaces are very powerful but users still don’t like looking at an empty text box. Much better for AI-powered apps to present users with context-aware suggestions that can be customised as needed”. Last month, OpenAI introduced a new ‘customised instructions’ feature for ChatGPT, that allows users to share anything with the artificial intelligence (AI)-chatbot for future conversations.

“Custom instructions are currently available in Beta for Plus users, and we plan to roll out to all users soon,” the company said in an article. Users can edit or delete custom instructions at any time for new conversations.



ChatGPT fined for exposing personal info of 687 S. Koreans

A total of 687 users in South Korea have been confirmed to be among those affected by the exposure…reports Asian Lite News

South Korea’s Personal Information Protection Commission (PIPC) on Thursday imposed a fine of 3.6 million won ($2,829) on OpenAI, the operator of the generative chatbot ChatGPT, for exposing the personal information of 687 South Korean users.

According to OpenAI, a now-patched bug in an open-source library used by ChatGPT created a caching issue in March. The bug made payment information of some ChatGPT Plus subscribers unintentionally visible during a nine-hour window, including first and last names, email addresses, the last four digits of credit card numbers and credit card expiration dates, Yonhap reported.

A total of 687 users in South Korea have been confirmed to be among those affected by the exposure.

The PIPC said it fined OpenAI for breaching its duty to report a data leak to the authorities within 24 hours of discovering it. But the privacy watchdog concluded that the company could not be held responsible for lax personal information protection measures.

The watchdog also recommended that OpenAI take measures to prevent a recurrence of the incident, comply with South Korea’s personal information protection law and cooperate actively with the commission’s prior inspection activities.



US med school experiments with GPT-4

GPT-4 also provided the correct diagnosis in its list of potential diagnoses in two-thirds of challenging cases, revealed the findings, published in JAMA…reports Asian Lite News

In a significant experiment, a US medical school used OpenAI’s GPT-4 to see if it could make accurate diagnoses in challenging medical cases.

Physician-researchers at Beth Israel Deaconess Medical Center (BIDMC) in Boston found that GPT-4 selected the correct diagnosis nearly 40 per cent of the time.

GPT-4 also provided the correct diagnosis in its list of potential diagnoses in two-thirds of challenging cases, the findings, published in JAMA, revealed.

“Recent advances in artificial intelligence have led to generative AI models that are capable of detailed text-based responses that score highly in standardised medical examinations,” said Adam Rodman, co-director of the Innovations in Media and Education Delivery (iMED) Initiative at BIDMC.

“We wanted to know if such a generative model could ‘think’ like a doctor, so we asked one to solve standardised complex diagnostic cases used for educational purposes. It did really, really well,” said Rodman, also an instructor in medicine at Harvard Medical School.

To assess the chatbot’s diagnostic skills, Rodman and colleagues used clinicopathological case conferences (CPCs), a series of complex and challenging patient cases including relevant clinical and laboratory data, imaging studies, and histopathological findings published in the New England Journal of Medicine for educational purposes.

Evaluating 70 CPC cases, the artificial intelligence exactly matched the final CPC diagnosis in 27 of them (39 per cent). In 64 per cent of the cases, the final CPC diagnosis was included in the AI’s differential — a list of possible conditions that could account for a patient’s symptoms, medical history, clinical findings and laboratory or imaging results.

“While chatbots cannot replace the expertise and knowledge of a trained medical professional, generative AI is a promising potential adjunct to human cognition in diagnosis,” said first author Zahir Kanjee, a hospitalist at BIDMC and assistant professor of medicine at Harvard Medical School.

“It has the potential to help physicians make sense of complex medical data and broaden or refine our diagnostic thinking,” he said.

While the study adds to a growing body of literature demonstrating the promising capabilities of AI technology, more research is needed on its optimal uses, benefits and limits, and particularly on privacy issues, to understand how these new AI models might transform healthcare delivery.



ChatGPT officiates wedding in absence of priest

This is not the first time that an AI chatbot like ChatGPT has done something unusual like this…reports Asian Lite News

In an unprecedented but heartwarming event, OpenAI’s chatbot ChatGPT stepped up to officiate the wedding of a US couple when faced with the unanticipated absence of a priest.

Reece Wiench and Deyton Truitt celebrated their wedding last weekend with the voice of the ChatGPT AI app leading the way, reports Fox News.

“Thank you all for joining us today to celebrate the extraordinary love and unity of Reece Wiench and Deyton Truitt,” the chatbot said at the couple’s wedding last month.

Wiench and Truitt said that they planned their wedding in five days because Truitt was about to deploy for the Army and Wiench wanted to join him after basic training.

In the US state of Colorado, there is no requirement for a licensed marriage official to officiate ceremonies, so the bride’s father, Stephen Wiench, came up with the idea of using a more accessible and cost-effective officiant.

The chatbot was at first hesitant to conduct the ceremony, according to the report.

“It said ‘no’ at first. ‘I can’t do this, I don’t have eyes, I don’t have a body. I can’t officiate at your wedding,'” Wiench was quoted as saying. The couple persisted and provided personal information about themselves to the chatbot, which was woven into ChatGPT’s remarks during the ceremony.

This is not the first time that an AI chatbot like ChatGPT has done something unusual like this.

Last week, a woman revealed that her long-time client stopped working with her after discovering that she was using a ChatGPT-like artificial intelligence (AI) to write content.

Last month, a US judge sanctioned the lawyer who submitted a legal brief written by ChatGPT, which included citations of non-existent court opinions and fake quotes.



ChatGPT 4 excels at picking the right imaging tests

They asked the AI in an open-ended way and by giving ChatGPT a list of options. They tested ChatGPT 3.5 as well as ChatGPT 4, a newer, more advanced version…reports Asian Lite News

OpenAI’s ChatGPT can support the clinical decision-making process, including when picking the correct radiological imaging tests for breast cancer screening or breast pain, finds a study.

The study, by investigators from Mass General Brigham in the US, suggests that large language models have the potential to assist decision-making for primary care doctors and referring providers in evaluating patients and ordering imaging tests for breast pain and breast cancer screenings. Their results are published in the Journal of the American College of Radiology.

“In this scenario, ChatGPT’s abilities were impressive,” said corresponding author Marc D. Succi, associate chair of Innovation and Commercialisation at Mass General Brigham Radiology and executive director of the MESH Incubator.

“I see it acting like a bridge between the referring healthcare professional and the expert radiologist — stepping in as a trained consultant to recommend the right imaging test at the point of care, without delay.

“This could reduce administrative time on both referring and consulting physicians in making these evidence-backed decisions, optimise workflow, reduce burnout, and reduce patient confusion and wait times,” Succi said.

In the study, the researchers asked ChatGPT 3.5 and 4 to help them decide what imaging tests to use for 21 made-up patient scenarios involving breast cancer screening or reports of breast pain, judged against the American College of Radiology’s appropriateness criteria.

They asked the AI in an open-ended way and by giving ChatGPT a list of options. They tested ChatGPT 3.5 as well as ChatGPT 4, a newer, more advanced version.

ChatGPT 4 outperformed 3.5, especially when given the available imaging options. For example, when asked about breast cancer screenings, and given multiple choice imaging options, ChatGPT 3.5 answered an average of 88.9 per cent of prompts correctly, and ChatGPT 4 got about 98.4 per cent right.

“This study doesn’t compare ChatGPT to existing radiologists because the existing gold standard is actually a set of guidelines from the American College of Radiology, which is the comparison we performed,” Succi said.

“This is purely an additive study, so we are not arguing that the AI is better than your doctor at choosing an imaging test but can be an excellent adjunct to optimise a doctor’s time on non-interpretive tasks.”



ChatGPT on iOS gets ‘Drag and Drop’ support

OpenAI has the ChatGPT app only for iOS and has an Android version in the plans, which it promised to bring soon to the market…reports Asian Lite News

OpenAI on Thursday updated the ChatGPT application on iOS and iPadOS with the introduction of ‘Drag and Drop’ support.

With this new feature, individual messages can now be dragged and dropped into other applications.

The application now also takes advantage of the entire iPad screen with the new update. Moreover, the company introduced Siri and Shortcuts integration for ChatGPT, which means that the app can now be used directly with Siri and Shortcuts.

Last month, the ChatGPT app was released for iOS users in India as the company expanded the availability of the app to more countries.

Currently, OpenAI offers the ChatGPT app only for iOS; an Android version is in the works, which the company has promised to bring to market soon.

Meanwhile, the company introduced a feature called ‘shared links’ for the app, which allows the users to create and share ChatGPT conversations with others.

“Recipients of your shared link can either view the conversation or copy it to their own chats to continue the thread. This feature is currently rolling out to a small set of testers in alpha, with plans to expand to all users (including free) in the upcoming weeks,” OpenAI said.



ChatGPT performs poorly on US urologists’ exam

The explanations provided by ChatGPT were longer than those provided by SASP, but “frequently redundant and cyclical in nature”, according to the authors…reports Asian Lite News

OpenAI’s much-acclaimed ChatGPT chatbot has failed a urology exam in the US, according to a study.

This comes at a time of growing interest in the potential role of artificial intelligence (AI) technology in medicine and healthcare.

The study, reported in the journal Urology Practice, showed that ChatGPT achieved less than a 30 per cent rate of correct answers on the American Urological Association’s widely used Self-Assessment Study Program for Urology (SASP).

“ChatGPT not only has a low rate of correct answers regarding clinical questions in urologic practice, but also makes certain types of errors that pose a risk of spreading medical misinformation,” said Christopher M. Deibert, from University of Nebraska Medical Center.

The AUA’s SASP is a 150-question practice examination addressing the core curriculum of medical knowledge in urology.

The study excluded 15 questions containing visual information such as pictures or graphs.

Overall, ChatGPT gave correct answers to less than 30 per cent of SASP questions: 28.2 per cent of multiple-choice questions and 26.7 per cent of open-ended questions.

The chatbot provided “indeterminate” responses to several questions, and on these questions its accuracy decreased when it was asked to regenerate its answers.

For most open-ended questions, ChatGPT provided an explanation for the selected answer.

The explanations provided by ChatGPT were longer than those provided by SASP, but “frequently redundant and cyclical in nature”, according to the authors.

“Overall, ChatGPT often gave vague justifications with broad statements and rarely commented on specifics,” Dr. Deibert said.

Even when given feedback, “ChatGPT continuously reiterated the original explanation despite it being inaccurate”.

The researchers suggest that while ChatGPT may do well on tests requiring recall of facts, it seems to fall short on questions pertaining to clinical medicine, which require “simultaneous weighing of multiple overlapping facts, situations and outcomes”.

“Given that LLMs are limited by their human training, further research is needed to understand their limitations and capabilities across multiple disciplines before it is made available for general use,” Dr. Deibert said.

“As is, utilisation of ChatGPT in urology has a high likelihood of facilitating medical misinformation for the untrained user.”



US judge orders lawyers not to use ChatGPT-drafted content  

Lawyer Steven A. Schwartz, representing a man who sued an airline, admitted in an affidavit that he had used OpenAI’s chatbot for his research…reports Asian Lite News

A US federal judge has categorically told lawyers that he will not allow any AI-generated content in his court.

Texas federal judge Brantley Starr said that any attorney appearing in his court must attest that “no portion of the filing was drafted by generative artificial intelligence”, or if it was, that it was checked “by a human being”, reports TechCrunch.

“All attorneys appearing before the court must file on the docket a certificate attesting either that no portion of the filing was drafted by generative artificial intelligence (such as ChatGPT, Harvey.AI, or Google Bard) or that any language drafted by generative artificial intelligence was checked for accuracy, using print reporters or traditional legal databases, by a human being,” read the standing order.

According to the judge, these AI platforms are incredibly powerful and have many uses in the law — form divorces, discovery requests, suggested errors in documents, anticipated questions at oral argument.

“But legal briefing is not one of them. Here’s why. These platforms in their current states are prone to hallucinations and bias. On hallucinations, they make stuff up — even quotes and citations,” the judge’s order further read.

Last week, ChatGPT fooled a lawyer into believing that citations the AI chatbot gave in a case against Colombian airline Avianca were real when they were, in fact, bogus.

Lawyer Steven A. Schwartz, representing a man who sued an airline, admitted in an affidavit that he had used OpenAI’s chatbot for his research.

After the opposing counsel pointed out the non-existent cases, US District Judge Kevin Castel confirmed that six of the submitted cases “appear to be bogus judicial decisions with bogus quotes and bogus internal citations”.

The judge set up a hearing as he considered sanctions for the plaintiff’s lawyers.

Last month, as part of a research study, ChatGPT falsely placed an innocent and highly respected US law professor on a list of legal scholars who had sexually harassed students in the past.

Jonathan Turley, Shapiro Chair of Public Interest Law at George Washington University, was left shocked when he realised that ChatGPT had named him in response to a research query about legal scholars who had sexually harassed someone.

“ChatGPT recently issued a false story accusing me of sexually assaulting students,” Turley posted in a tweet.
