(DAY 897) Communication gaps and missed deadlines in AI era

Gaurav Parashar

Salary expectations are hardly ever about figures alone. They are an amalgam of financial requirements, personal benchmarks, market conditions, and perceived value within the company. Employees, for instance, tend to form their views from a combination of past salary increments, inflation, and industry averages. Previously, most of these inputs came from peers, recruiters, or an organized professional circle. This has changed with the boom of large language models (LLMs), which make it easy to generate salary estimates drawn from massive text corpora. The advantage is that more people can now use AI to sanity-check their salary expectations. However, the quality control on these estimates is minimal or untested. While LLMs shine at producing well-structured, confident outputs, that confidence is often far removed from the reality of a given company's budgets, internal structure, or compensation culture.

The biggest problem stems from the way people interpret the salary figures AI provides. LLMs can generate numbers that sound reasonable but are the result of averaging across locations, roles, seniority levels, and industries, which skews them optimistic or pessimistic. Since these models learn from patterns in text rather than verified salary databases, their outputs are not grounded in reality and can carry biased, outdated, or simply inaccurate information. One party may treat the figure as authoritative, while the other knows the number does not apply to that role. This discrepancy can turn what ought to be a simple negotiation into a difficult conversation, because both sides are starting from completely different reference points. The confusion stems from a lack of clarity about how the information was gathered, not from bad intentions.

For managers, handling raise expectations rooted in AI-generated figures becomes a delicate responsibility. Dismissing the data outright or avoiding the conversation erodes trust; so does revealing too much about internal compensation processes. Salary decisions can build or erode trust, and with AI tools increasingly common, acknowledging that these tools generalize helps: admitting that AI estimates can be inaccurate makes employees feel heard. The goal is not to validate or refute a specific number, nor to treat the AI figure as deterministic, but to hold a clear dialogue free of defensiveness. With the right tone, these conversations can build trust rather than damage it.

From an employee's perspective, treating information generated by LLMs as a starting point rather than a conclusion holds merit. While AI tools can surface emerging trends and market midpoints, they disregard the specifics of a person's role, contributions, and overall company context. AI can offer some insight, but it should be supplemented with recruiter, industry, and HR conversations for a fuller picture. The problem is putting too much weight on a single figure, particularly one generated by an algorithm with no transparent methodology. During salary negotiations, engaging with the company's point of view usually yields better long-term value than fixating on an externally determined number.

As of now, both employees and employers are trying to make sense of the overlap between AI-generated suggestions and salary expectations. LLMs are good at aggregating information, but they cannot produce truths specific to a particular company. Gaps in understanding will persist until both parties make the effort to share the necessary context and agree on a framework before numbers are laid on the table. Transparency here means sharing not just the numbers, but the reasoning and decision-making processes that led to them. The more this becomes part of workplace culture, the less likely tensions caused by AI-informed salary expectations become.