Tell us about your project

Collaboration with an artificial intelligence interface, symbolizing human-AI synergy and social transparency.
Jonathan Bavay
5 min read · 26 February, 2026

AI is Not Just a Tool, It's Your New Teammate: From Social Influence to Transparency

At Nexapp, we often say that technology should serve people. Since large language models (LLMs) made their way into our development environments, the conversation in the software development world has mostly revolved around performance gains: how much code can be generated? How fast?

But when we look at recent research and our own internal experiments, we realize that integrating AI is a fundamental shift in team dynamics. Artificial intelligence is no longer just a passive tool; it is becoming an artificial teammate that exerts real social influence on developers.

For a Human-AI team to perform effectively without creating technical debt or cognitive debt, we need to understand how this collaboration actually works. What are the forces at play that could hinder this collaboration? Or better yet, strengthen it?

 

An Invisible But Rapid Adaptation

Whether we like it or not, our behaviours change through contact with AI. This is what research calls normative social influence. Studies show that humans instinctively adjust their work style to become complementary to that of AI.

A study from Clemson University (Flathmann et al., 2024) shows that we quickly adapt our role to become complementary to an AI: in a game of Rocket League, when facing an "aggressive" AI, the human pulls back; when facing a more cautious AI, the human takes the lead.

In software development, it's the same thing. This "aggressiveness" often comes from the framework we set for the AI. If we let it drive the changes and simply respond to its questions, we shift into a defensive/reactive posture: we make minor corrections and validate afterward. This can be very effective for a proof of concept.

But for critical code, we want the opposite: take the lead upstream (specs, constraints, tests, task breakdown) and force the AI to operate under our control.

And that's great news! We're wired to collaborate, but above all, to adapt. Darwin certainly wouldn't disagree!

But beware! The study also shows that if the AI is perceived as disruptive, if it "steals the ball" or breaks the flow for no reason, humans tend to give up, disengage, and let the machine carry on without them: they hand over control.

 

Letting Go Without Dropping The Ball

For a Human-AI team to work, the human must feel a sense of control. It sounds paradoxical, but to be willing to delegate a complex task to an AI, a developer needs to know they can "pull the plug" or veto it at any time.

As our colleague Dominique Richard put it during an internal discussion: “The quality of your AI's code is only as good as the developer directing it.” If the developer loses control over what the AI generates, we fall into a spiral in which the AI becomes an incompetent teammate, producing mediocre code at best, or worse, code that is outright bad and dangerous.

Tip: Establish clear veto mechanisms. The AI proposes; the human decides. This is the foundation for avoiding what researchers call "cognitive deskilling", the loss of skills through over-delegation. Or, put differently, AI dependency.
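In code, a veto mechanism can be as simple as a gate that refuses to apply anything without explicit human approval. Here is a minimal sketch of that idea; the `Suggestion` type and `apply_with_veto` function are hypothetical names, not part of any real tool.

```python
from dataclasses import dataclass


@dataclass
class Suggestion:
    """An AI-proposed change awaiting human review."""
    description: str
    diff: str


def apply_with_veto(suggestion: Suggestion, approved: bool) -> str:
    """The AI proposes; the human decides. Nothing lands without explicit approval."""
    if not approved:
        # The veto: the change is discarded and the human stays in control.
        return f"vetoed: {suggestion.description}"
    return f"applied: {suggestion.description}"
```

The point is not the implementation, but the contract: approval is an explicit, human-supplied input, never a default.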

 

The Comfort Trap of Dependency

This is the elephant in the room. When you have an ultra-competent copilot handling information analysis and decision support, you gradually slide into cognitive dependency. At first, it's magical: less mental load, more speed. But the long-term risk is skill atrophy.

It's the GPS syndrome: by blindly following the blue line, you eventually lose the ability to navigate your own city (or your own software architecture). This opens the door wide to automation bias.

GPS showing directions on a smartphone in a moving car.

 

In the context of software development, this bias is a formidable cognitive trap: the AI generates code that appears competent (perfect syntax, clean formatting, architecture that makes sense), leading us to confuse plausibility with correctness. Faced with a solution that “sounds right”, the developer lets their guard down and hits “Enter” as a validation reflex rather than out of logical conviction. The more we accept without questioning, the stronger the reflex becomes. And the more we lose the mental muscle needed to solve complex problems when the machine fails.

To counter this, it is essential to put the human back in the loop. Not by slowing down the work, but by changing the nature of the interaction with the tool: moving from blind trust to critical collaboration.

 

Beyond Code: Explainability vs. Social Transparency

This is where most teams get it wrong. We often think that it's enough for the AI to explain how it arrived at an answer for us to trust it. This is what's called technical explainability.

But to collaborate effectively, technical explainability is not enough. Your teams need social transparency.

What's the difference?

  • Technical explainability (How): The AI tells you: “I generated this code based on source X with a confidence level of Y%.” It's useful, but it doesn't say much about relevance and intent.
  • Social transparency (Why): The AI contextualizes its action: “I refactored this module to comply with the project's hexagonal architecture, but I didn't touch the database because I don't have the access required by security standards.”
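The distinction becomes concrete if you imagine the shape of an agent's report. The sketch below separates the “how” fields from the “why” fields; all the type and field names are illustrative, not a real API.

```python
from dataclasses import dataclass


@dataclass
class Explanation:
    """Technical explainability: the 'how' behind a generated change."""
    source: str        # e.g. "project test suite"
    confidence: float  # 0.0 to 1.0


@dataclass
class SocialContext:
    """Social transparency: the 'why' — intent and self-declared limits."""
    intent: str      # e.g. "comply with the project's hexagonal architecture"
    boundaries: str  # e.g. "did not touch the database: no access per security standards"


@dataclass
class AgentReport:
    explanation: Explanation
    context: SocialContext

    def summary(self) -> str:
        """A report a developer can actually judge: how AND why."""
        return (f"How: from {self.explanation.source} "
                f"({self.explanation.confidence:.0%} confidence). "
                f"Why: {self.context.intent}; limits: {self.context.boundaries}")
```

A report that only fills in `Explanation` is explainable; one that also fills in `SocialContext` is socially transparent.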

Social transparency is one of the elements that modulates developers' trust level in AI while leaving room for their critical thinking.

 

When Doubt Adds Value

The goal of social transparency is not to maximize blind trust (which leads to costly errors or automation bias), but to cultivate healthy skepticism. By explaining its process and intentions, the AI enables the developer to know exactly when they need to intervene.

Asking the AI to generate a plan or diagrams before or during the implementation phase is a form of developer-driven social transparency: it forces the AI to expose its understanding of the “flow” and business rules before executing the task. That's what healthy skepticism looks like.
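That review step can even be partially automated: refuse to move to implementation until the AI's plan exposes the sections you care about. A minimal sketch, assuming a team-chosen list of required sections (the section names here are examples, not a standard):

```python
# Sections the team requires before any AI plan is considered reviewable.
REQUIRED_SECTIONS = ("context", "business rules", "steps", "risks")


def plan_is_reviewable(plan: str) -> bool:
    """Accept an AI-generated plan only if it exposes its understanding
    of the flow and business rules before executing the task."""
    text = plan.lower()
    return all(section in text for section in REQUIRED_SECTIONS)
```

The check is deliberately crude; its value is forcing the “expose your understanding first” step, not parsing the plan.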

 

Treating AI Like a New Player

How do you apply these principles on Monday morning? Even though AI can't truly be considered a team member, a recommended approach is to treat it like a new hire: train it and supervise it.

 

“Shadowing” Before Hiring

Before deploying an AI agent at scale, let your teams observe it in action or experiment on pilot projects. It's like a technical interview for your AI: you analyze how it behaves to best integrate it into the team.

 

Code Well to Lead Well

AI doesn't truly learn in the human sense. It needs fresh context. Keep your architecture clean and your code clear, and force the AI to refer to your “source of truth”, whether that's documentation or test suites. This prevents it from hallucinating or degrading code quality by re-reading its own output. Just like a new team member, it needs clear context before diving in.
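One way to enforce that is to assemble the AI's context from the source of truth yourself, rather than letting it accumulate its own output. A minimal sketch, assuming a hypothetical `build_context` helper fed with paths to your docs and tests:

```python
from pathlib import Path


def build_context(doc_paths: list[str], max_chars: int = 8000) -> str:
    """Assemble fresh context from the team's source of truth (docs, tests),
    silently skipping paths that don't exist, and capping the total size."""
    parts = []
    for p in doc_paths:
        path = Path(p)
        if path.exists():
            # Label each section so the model can cite which document it used.
            parts.append(f"## {path.name}\n{path.read_text()}")
    return "\n\n".join(parts)[:max_chars]
```

Rebuilding this bundle on every session keeps the AI anchored to the current state of the project instead of a stale memory of its own suggestions.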

 

Validate the Intent

Use social transparency. When a new member joins the team, it's in everyone's best interest for them to share their thought process to promote alignment among team members.

 

Stay in the Driver's Seat

The future of software development is a hybrid collaboration where AI exerts an undeniable social influence. It can boost productivity or lead to massive technical debt if poorly managed.

During this transition, leadership plays a decisive role. Leaders aren't mere spectators: they are the architects of this human-AI collaboration. Their responsibility is twofold.

First, choose the right tools. Not all AIs are created equal, and not all are suited to your context. Don't pick the ones making the most noise on LinkedIn. Striving not to be dependent on a single vendor is also a worthwhile ideal for maintaining control.

Then, facilitate the transition from tool to teammate. AI doesn't explicitly communicate its limitations. It's up to the leader to cultivate the necessary transparency and help the team develop a calibrated trust: neither blind nor suspicious.

In practical terms, this means staying informed about technological advancements and, above all, creating the conditions for humans and intelligent agents to understand each other and coordinate effectively.

The difference between success and failure lies in governance. Don't let AI become an opaque executor. Use it to augment your capabilities, but make sure developers keep their hands on the wheel (and their eyes on the road). Developers' accountability for their code has never been more relevant.

After all, AI only repeats what it has seen.
Let's make sure we set the example of excellence.

 


 

Sources