ChatGPT can help with work assignments, but supervision is still needed



If ChatGPT, the buzzy new chatbot from OpenAI, wrote this story, it would say:

“As companies look to streamline their operations and increase productivity, many are turning to AI tools like ChatGPT to help their employees complete tasks. But can workers really rely on these AI programs to take on more and more responsibilities, or will they end up falling short?”

Not great, but not bad, right?

Workers are experimenting with ChatGPT for tasks such as writing emails, generating code, or even completing a year-end review. The bot draws on data from the internet, books and Wikipedia to produce conversational responses. But the technology isn’t perfect. Our tests found that it sometimes offers answers that are potentially plagiarized, contradict one another, are factually incorrect or contain grammatical errors, to name a few problems, all of which can be problematic at work.

ChatGPT is basically a predictive text system, similar to but better than those built into your phone’s texting apps, says Jacob Andreas, an assistant professor in MIT’s Computer Science and Artificial Intelligence Laboratory who studies natural language processing. While that often produces answers that sound good, the content may have some issues, he said.

“If you look at some of these really long essays generated by ChatGPT, it’s very easy to see places where they contradict each other,” he said. “When you ask it to generate code, it’s mostly correct, but there are often errors.”
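That “predictive text” framing is the key technical point: the model repeatedly picks a statistically likely next word given the words so far. As a rough intuition only, and not ChatGPT’s actual architecture (which is a large transformer network trained on vastly more text), here is a toy next-word predictor in Python:

```python
# Toy bigram next-word predictor: a minimal sketch of the "predictive
# text" idea, NOT how ChatGPT actually works. The corpus below is an
# invented stand-in for training data.
import random
from collections import Counter, defaultdict

corpus = (
    "my day is going well thanks for asking . "
    "my day is going fine thanks for the note . "
    "my day is busy but it is going well ."
).split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample a next word in proportion to how often it followed `prev`."""
    counts = following[prev]
    words = list(counts)
    weights = [counts[w] for w in words]
    return random.choices(words, weights=weights)[0]

# Generate a short, plausible-sounding continuation one word at a time.
word, output = "my", ["my"]
for _ in range(8):
    word = next_word(word)
    output.append(word)
print(" ".join(output))  # e.g. "my day is going well thanks for asking ."
```

Each step is only locally plausible, and nothing in the loop checks the running output against facts or against what was said earlier, which is one intuition for why fluent text can still contradict itself.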

We wanted to know how well ChatGPT could handle everyday office tasks. Here’s what we found after testing in five categories.

Responding to messages

We prompted ChatGPT to respond to several different types of incoming messages.

In most cases, the AI produced reasonably appropriate responses, though most were wordy. For example, when replying to a colleague on Slack who asked how my day was going, it was repetitive: “@[Colleague], Thanks for asking! My day is going well, thanks for the inquiry.”
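We ran these tests in ChatGPT’s chat interface, but the same kind of prompt can also be sent programmatically. Here is a minimal sketch using OpenAI’s Python client; the model name, system prompt and wording are illustrative assumptions, not the exact setup we used:

```python
# Sketch of reproducing the message-reply test against OpenAI's API.
# Assumes the `openai` package (v1+) is installed and an OPENAI_API_KEY
# environment variable is set; model names change over time.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed stand-in for the model behind ChatGPT
    messages=[
        {"role": "system", "content": "You write brief, friendly workplace replies."},
        {"role": "user", "content": "Reply to a colleague on Slack who asked how my day is going."},
    ],
)

print(response.choices[0].message.content)
```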

The bot often left phrases in parentheses when it wasn’t sure what or whom it was referring to. It also assumed details that weren’t included in the prompt, resulting in some factually incorrect statements about my work.

In one instance, it said it couldn’t complete the task, replying that it “didn’t have the ability to receive and respond to emails.” But when presented with a more general request, it produced an answer.

Surprisingly, ChatGPT managed to generate sarcasm when prompted to respond to a colleague asking if Big Tech is doing a good job.

Generating new ideas

One way people use generative AI is to come up with new ideas. But experts warn that people should be careful if they use ChatGPT for this at work.

“We don’t understand to what extent this is just plagiarism,” Andreas said.

The possibility of plagiarism was clear when we prompted ChatGPT to develop story ideas on my beat. One suggestion in particular was for a story idea and angle I had already considered. While it’s unclear whether the chatbot was pulling from my previous stories, others like them, or simply generating an idea from other data on the internet, the fact remained: the idea was not new.

“It’s nice to sound human, but the actual content and ideas are usually well known,” said Hatim Rahman, an assistant professor at Northwestern University’s Kellogg School of Management who studies the impact of artificial intelligence on work. “They are not new insights.”

Another idea was out of date, exploring a story angle that today would be factually untrue. ChatGPT says it has “limited knowledge” of anything after 2021.

Providing more detail in the prompt led to more focused ideas. However, when I asked ChatGPT to write some “weird” or “funny” headlines, the results were appalling and somewhat nonsensical.

Navigating difficult conversations

Have you ever had a coworker talk too loudly while you were trying to work? Maybe your boss is hosting too many meetings, reducing your focus time?

We tested whether ChatGPT could help navigate tricky workplace situations like these. For the most part, it produced relevant responses that could serve as great starting points for workers. However, they were often a bit wordy, formulaic, and in one case a complete contradiction.

“These models don’t understand anything,” Rahman said. “The underlying technology looks at statistical correlations … so it will give you formulaic answers.”

Its layoff notice easily held its own against, and in some cases outperformed, the notices companies have sent out in recent years. Unprompted, the bot cited “the current economic climate and the impact of the pandemic” as reasons for the layoffs and said the company understood “how difficult this news can be for everyone.” It assumed the laid-off workers would have support and resources and, as prompted, motivated the team by saying they would “come out of this stronger.”

When handling difficult conversations with colleagues, the bot greeted them, addressed the issue gently, softened the delivery by saying “I understand” the person’s intent, and ended the note with a request for feedback or further discussion.

But in one case, when it was asked to tell a colleague to lower his voice on phone calls, it completely misinterpreted the prompt.

Creating team updates

We also tested whether ChatGPT could generate team updates if we gave it key points to communicate.

Our initial tests again produced appropriate responses, though they were formulaic and somewhat monotone. However, when we specified an “excited” tone, the wording became more casual and included exclamation marks. But each note sounded very similar even after changing the prompt.

“It’s the sentence structure, but more so the connection of the ideas,” Rahman said. “It’s very logical and formulaic … it’s like a high school essay.”

As before, it made assumptions when it lacked the necessary information. That became problematic when it didn’t know which pronouns to use for my colleague, a mistake that could signal to colleagues that either I didn’t write the note or I don’t know my team members very well.

Writing self-assessments

Writing year-end self-assessments can cause fear and anxiety for some, resulting in a review that sells their work short.

Feeding ChatGPT clear accomplishments, including key data points, produced a rave review of me. The first attempt was problematic because the initial prompt asked for a self-assessment for “Danielle Abril” rather than for “me.” That led to a third-person review that sounded as if it came from Sesame Street’s Elmo.

Switching the prompt to ask for a review of “me” and “my” accomplishments resulted in complimentary phrases such as “I have consistently demonstrated strong ability,” “I am always willing to go the extra mile,” “I have been an asset to the team,” and “I am proud of the contributions I have made.” It also included a nod to the future: “I am confident that I will continue to make a valuable contribution.”

Some of the highlights were a bit generic, but overall it was a glowing review that could serve as a good rubric. The bot produced similar results when asked to write cover letters. However, ChatGPT did make one big flub: it incorrectly assumed my job title.

Was ChatGPT useful for common work tasks?

It helped, but sometimes its mistakes caused more work than doing the task manually would have.

ChatGPT served as a great starting point in most cases, providing helpful language and initial ideas. But it also produced responses that contained errors, factually incorrect information, excess words, plagiarism and miscommunication.

“I can see it being useful … but only insofar as the user is willing to check the output,” Andreas said. “It’s not good enough to let it off the rails and send emails to your colleagues.”

