Ending Ghosting & AI Bias in Hiring: Ethical Practices

A couple of years ago, I ran an experiment: I applied for the same role twice, once with a detailed, credential-rich submission and once with a bare résumé. The result? The résumé-only application triggered an automated invitation to interview with another AI system, while the detailed submission was rejected in 15 minutes, without anyone reviewing the materials.

When I confronted the recruiter, the debate turned heated. We didn’t just disagree on AI’s readiness—we collided on what ethical hiring should look like. I declined the interview, not out of pride, but out of principle.

That moment confirmed what I’d long suspected: untested AI in hiring isn’t just inefficient—it’s ethically dangerous. It rewards opacity, penalizes authenticity, and erodes the trust candidates place in organizations.

Now, as I prepare for the Advanced AI Security Manager (AAISM) exam in November 2025, that lesson resonates more than ever. Ethical lapses in hiring, whether ghosting or blind faith in algorithms, aren’t technical glitches. They’re moral failures. This article targets one of the most common and corrosive examples: ghosting. It’s not just rude; it’s a violation of integrity. And in an era where AI is reshaping how we hire, it’s time for leaders to treat candidates as people, not disposable data points.

The Ethical Failure of Ghosting

Ethical leadership is about embodying values that inspire trust. I once worked under a leader whose integrity shone through when he refused to compromise on principles, even at personal cost. His commitment to transparency taught me that ethical leaders prioritize people over expediency. Ghosting candidates does the opposite: it dismisses their time, effort, and emotional investment, and it signals a lack of accountability that erodes trust between employers and job seekers. Research shows that 53% of candidates have been ghosted by employers, with 61% experiencing post-interview silence, a trend that has risen 9% since April 2024. Beyond frustrating candidates, the practice has been described as a sign of “unethical hiring practices or a toxic company culture.”

Ethical Perspectives on Ghosting

Ghosting can be analyzed through two ethical lenses: deontological and consequentialist. From a deontological perspective, decisions are guided by moral duties. Ghosting violates the duty to treat candidates with respect, as every individual deserves closure after engaging in the hiring process. A study found that ghosting harms candidates’ self-esteem and undermines the satisfaction of their psychological needs, reinforcing the moral obligation to communicate. A deontologist would argue that providing feedback, even if negative, is a matter of fairness and justice, regardless of the effort involved.

Consequentialists focus on outcomes. Ghosting may seem like a time-saver, but its long-term costs are significant. A 2023 LinkedIn study found that 70% of ghosted candidates are less likely to re-engage with an employer, damaging the organization’s reputation and talent pipeline. Additionally, 47% of HR professionals report complaints about ghost listings—jobs posted without intent to fill—further eroding trust. These consequences highlight that transparent communication serves the greater good by fostering goodwill and maintaining a positive employer brand.

Cognitive Moral Development and Hiring Decisions

Cognitive moral development offers insight into why ghosting persists and why it’s unethical. At the pre-conventional level, hiring managers might ghost to avoid uncomfortable conversations, driven by self-interest or fear of conflict. Research indicates that 69% of HR professionals admit to abandoning searches without notifying candidates, often due to overloaded schedules or fear of delivering bad news. At the conventional level, ghosting may align with a company culture where non-communication is normalized, but this still falls short of ethical standards. Ethical leaders operate at the post-conventional level, prioritizing universal principles like justice. They recognize that ghosting undermines trust and devalues candidates, choosing transparency to uphold fairness.

The ethical challenges of hiring extend beyond human decision-making to the growing use of AI, as my own experience with an AI-driven rejection process illustrates. The following section examines how AI can perpetuate or mitigate unethical practices, such as ghosting, guided by standards like NIST SP 1270.

AI in Hiring: Balancing Efficiency and Ethical Responsibility

The rise of artificial intelligence (AI) in hiring processes offers both opportunities and challenges for creating ethical, respectful candidate experiences. AI tools, such as applicant tracking systems (ATS) and predictive algorithms, promise efficiency by automating resume screening, candidate ranking, and even interview scheduling. However, without careful design and oversight, these tools can perpetuate biases and contribute to unethical practices, including ghosting. The National Institute of Standards and Technology’s (NIST) Special Publication 1270, Towards a Standard for Identifying and Managing Bias in Artificial Intelligence, provides a framework for understanding and mitigating bias in AI systems, which is critical for ethical hiring. By applying NIST SP 1270’s insights, organizations can address biases in AI-driven hiring and ensure transparency, fairness, and respect for candidates.

AI and the Risk of Bias in Hiring

NIST SP 1270 identifies three categories of bias in AI: statistical (or computational), systemic, and human. In hiring, statistical biases arise from unrepresentative datasets or algorithms that prioritize certain traits, such as specific educational backgrounds or resume keywords, in ways that may disproportionately exclude underrepresented groups. Systemic biases stem from institutional practices, such as job descriptions or evaluation criteria that reflect outdated or discriminatory norms. Human biases, meanwhile, can be embedded by developers or recruiters who unconsciously influence AI model design or interpret its outputs. My own experience with an AI-driven rejection, despite comprehensive qualifications, underscores how statistical biases can produce unfair outcomes: rapid rejections without human review, or outright ghosting when no feedback is provided. A 2025 study found that 52% of job candidates believe AI screens their applications, yet only 26% trust AI to evaluate them fairly, highlighting public concern about bias in AI-driven hiring processes.

These biases can exacerbate unethical practices, such as ghosting. For example, AI systems that automatically filter out candidates without notifying them can institutionalize ghosting at scale, leaving applicants without closure. This violates the deontological duty to treat candidates with respect, as outlined earlier, and has consequentialist impacts, such as damaging employer reputation. A 2023 LinkedIn study found that 70% of ghosted candidates are less likely to reapply, a risk that is amplified when AI systems lack transparency in their decision-making processes.

Applying NIST SP 1270 to Ethical AI Hiring

NIST SP 1270 emphasizes a socio-technical approach to managing AI bias, considering not just technical factors but also societal and human contexts. In hiring, this means designing AI systems that align with ethical principles of fairness, transparency, and accountability. The publication outlines three challenges for mitigating bias: datasets, testing and evaluation, and human factors. Organizations can address these by:

    Ensuring Representative Datasets: AI hiring tools must be trained on diverse, representative data to avoid statistical biases. For instance, if an ATS is trained primarily on resumes from a homogeneous group, it may unfairly screen out qualified candidates from underrepresented backgrounds. Regular audits of datasets can help identify and correct such biases.

    Robust Testing and Evaluation: NIST SP 1270 advocates for rigorous testing to detect biases in AI outputs. In hiring, this could involve simulating candidate evaluations to ensure AI systems do not disproportionately reject certain groups or fail to provide feedback. Testing can also identify if AI contributes to ghosting by flagging candidates who are filtered out without notification.

    Addressing Human Factors: Human biases in designing or deploying AI systems can perpetuate unfair practices. Training HR teams to recognize and mitigate their biases, as well as involving diverse stakeholders in AI development, can align systems with ethical standards.
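
The testing and evaluation step above can start with a simple disparate-impact check on screening outcomes. The sketch below is a minimal illustration, not part of NIST SP 1270 itself: the group labels and outcomes are invented, and the 0.8 threshold borrows the well-known “four-fifths rule” from U.S. employment-selection guidance.

```python
from collections import defaultdict

# Hypothetical screening outcomes: (candidate group, advanced past AI screen?)
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def selection_rates(records):
    """Compute the fraction of candidates in each group who advance."""
    totals, passed = defaultdict(int), defaultdict(int)
    for group, advanced in records:
        totals[group] += 1
        if advanced:
            passed[group] += 1
    return {g: passed[g] / totals[g] for g in totals}

def disparate_impact_flags(rates, threshold=0.8):
    """Flag groups whose selection rate is below `threshold` times the
    highest group's rate (the 'four-fifths rule')."""
    best = max(rates.values())
    return {g: r / best < threshold for g, r in rates.items()}

rates = selection_rates(outcomes)
flags = disparate_impact_flags(rates)
print(rates)  # {'group_a': 0.75, 'group_b': 0.25}
print(flags)  # {'group_a': False, 'group_b': True}
```

In this toy data, group_b advances at one third of group_a’s rate, so the audit flags it for human review; a real audit would run this over production screening logs on a regular schedule.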

By adopting NIST SP 1270’s guidance, organizations can reduce the risk of AI-driven bias and ensure candidates are treated with respect. For example, implementing automated rejection notices within AI systems can prevent ghosting, providing closure to candidates while maintaining efficiency. This aligns with the ethical duty to communicate transparently, as discussed in the deontological perspective.
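
As one concrete illustration of such automated closure notices, the sketch below generates a message for every screened-out candidate so no one leaves the pipeline in silence. The `Candidate` type, status values, and wording are hypothetical assumptions for the example, not a real ATS API.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Candidate:
    name: str
    email: str
    status: str  # hypothetical statuses: "screened_out" or "advanced"

def rejection_notice(candidate: Candidate) -> str:
    """Render a closure message for a screened-out candidate."""
    return (
        f"Dear {candidate.name},\n"
        f"Thank you for applying. As of {date.today():%B %d, %Y}, we will not "
        "be moving forward with your application. We appreciate the time you "
        "invested in the process."
    )

def close_out_pipeline(candidates: list[Candidate]) -> list[str]:
    """Generate a notice for every screened-out candidate instead of
    silently dropping them (i.e., ghosting)."""
    return [rejection_notice(c) for c in candidates if c.status == "screened_out"]

notices = close_out_pipeline([
    Candidate("A. Patel", "a@example.com", "screened_out"),
    Candidate("B. Lee", "b@example.com", "advanced"),
])
print(notices[0])
```

The point is structural: if the notice is generated wherever the screen-out decision is made, ghosting becomes impossible by construction rather than dependent on a busy recruiter’s follow-through.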

Ethical AI as a Tool for Respectful Hiring

When designed responsibly, AI can enhance ethical hiring practices rather than undermine them. AI tools can standardize communication, such as sending timely updates or personalized rejection letters, reducing the likelihood of ghosting. A 2025 systematic review of AI in recruitment found that well-designed AI systems improve efficiency while maintaining fairness, provided they are regularly audited for bias. By integrating NIST SP 1270’s principles, organizations can create AI-driven hiring processes that prioritize candidate experience and uphold integrity.

However, ethical AI use requires more than technical fixes. It demands a cultural commitment to accountability, as outlined in NIST’s AI Risk Management Framework, which SP 1270 supports. Ethical leaders must ensure that AI tools reflect their organization's values, such as respect and fairness, and avoid shortcuts that prioritize efficiency over candidate dignity. This aligns with the post-conventional level of cognitive moral development, where universal principles guide decisions.

Moving Forward with Ethical AI in Hiring

To integrate AI ethically into hiring, leaders can expand on the steps outlined for eliminating ghosting:

    Audit AI Systems Regularly: Use NIST SP 1270’s framework to assess AI tools for statistical, systemic, and human biases, ensuring they do not perpetuate unfair practices in hiring decisions.

    Transparent AI Communication: Configure AI systems to provide clear, timely feedback to candidates, such as automated status updates or rejection notices, to uphold transparency.

    Train Teams on Bias Awareness: Educate HR professionals on NIST SP 1270’s categories of bias to foster informed decision-making when deploying AI tools.

    Engage Diverse Stakeholders: Involve diverse teams in AI system design and evaluation to minimize human biases and ensure inclusivity, as recommended by NIST.
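
The first step, auditing AI systems, can begin with something as simple as comparing training-data composition against the applicant pool. The sketch below is an illustrative assumption, not NIST guidance verbatim: the group names, shares, and 5% tolerance are invented for the example.

```python
def dataset_composition(labels):
    """Fraction of training examples belonging to each group."""
    counts = {}
    for g in labels:
        counts[g] = counts.get(g, 0) + 1
    total = len(labels)
    return {g: c / total for g, c in counts.items()}

def underrepresented(train_share, baseline_share, tolerance=0.05):
    """Flag groups whose share of the training data falls more than
    `tolerance` below their share of the applicant pool."""
    return {
        g: train_share.get(g, 0.0) < baseline_share[g] - tolerance
        for g in baseline_share
    }

# Illustrative data: a model trained mostly on one group's résumés,
# against an applicant pool that is 70/30.
train = ["group_a"] * 90 + ["group_b"] * 10
pool_baseline = {"group_a": 0.7, "group_b": 0.3}

comp = dataset_composition(train)
flags = underrepresented(comp, pool_baseline)
print(flags)  # {'group_a': False, 'group_b': True}
```

Here group_b makes up 10% of the training data against 30% of the applicant pool, so the audit flags it; the fix would be rebalancing or augmenting the training set before the next deployment.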

By aligning AI use with NIST SP 1270’s guidance, organizations can transform hiring into a process that respects candidates and builds trust, mitigating the ethical failures of ghosting while addressing bias comprehensively.

Applying the Ethical Lens Inventory (ELI)

My Ethical Lens Inventory results highlight a preference for the responsibility lens, rooted in rationality and mild autonomy. This lens drives me to make reasoned, principled decisions, considering the impact on others. Ghosting conflicts with this approach, as it neglects the careful evaluation of how actions affect candidates’ experiences. My classical virtue of prudence—making wise, contextual decisions—further underscores the need for communication that minimizes harm and upholds integrity. Reflecting on past blind spots, like bypassing processes for quick wins, I’ve learned the value of self-awareness. Ghosting is a similar shortcut, tempting under pressure but misaligned with ethical values. As my father advised, “The world judges you by your actions, not your intentions.”

Transforming Hiring Practices

To eliminate ghosting and build ethical hiring practices, leaders can adopt these steps:

    Cultivate Self-Awareness: Recognize the urge to ghost as a shortcut and ask, “Would I want to be treated this way?” Align actions with ethical principles.

    Trust Your Instincts: If ghosting feels wrong, it likely is. Pause and choose transparency over convenience.

    Standardize Communication: Implement automated responses or rejection templates to ensure timely closure, as suggested by HR experts. When using AI tools, configure them to deliver transparent feedback, aligning with NIST SP 1270’s emphasis on mitigating bias through clear communication.

    Promote an Ethical Culture: Model transparent communication and mentor teams to prioritize candidate experience, embedding respect into the hiring process.

    Seek Candidate Feedback: Invite input on the hiring experience to identify gaps, aligning with principles of fairness and continuous improvement.

    Leverage Ethical AI Design: Use AI tools designed with standards like NIST SP 1270 to ensure fairness and transparency, such as automating candidate updates to prevent ghosting, while regularly auditing systems for bias.

A Call for Ethical Hiring

Ghosting candidates is unethical because it disregards their dignity, harms organizational reputation, and undermines trust. Similarly, using AI in hiring without addressing biases, as outlined in NIST SP 1270, risks perpetuating unfair practices. Ethical hiring practices, whether human- or AI-driven, reflect the integrity we expect in leadership—treating every candidate with respect, regardless of the outcome. By prioritizing transparency and accountability, we can develop hiring processes that attract top talent and foster trust.

Have you experienced ghosting or unfair AI-driven hiring processes as a candidate or employer? What steps are you taking to ensure ethical hiring practices in your organization? Share your insights—let’s work together to make hiring a model of respect and integrity.

Originally published on LinkedIn: September 7, 2025



