  • A GitHub survey reveals that 92% of U.S.-based developers use AI-based coding tools in their work.
  • AI-based coding tools, such as GitHub’s Copilot, are seeing increased competition from the likes of Google and Amazon.
  • Despite their popularity, concerns persist about the potential for these tools to encourage less secure coding practices.

In a recent survey conducted by GitHub, a significant majority of developers reported using AI-based coding tools in both their work and side projects, reflecting how rapidly these technologies have been integrated into professional and personal coding environments.

Prevalence of AI Coding Tools

The image of the solitary programmer has become outdated as software development increasingly relies on collaboration, now facilitated by AI-based assistants. The study, carried out by GitHub in partnership with Wakefield Research, shows that 92% of U.S.-based developers use AI coding tools in both their professional tasks and personal projects.

[Chart: The effect of collaboration on the developer experience. Credits: GitHub]

GitHub Copilot and its Competitors

Microsoft-owned GitHub offers one of the most widely used AI coding tools, GitHub Copilot, which recently integrated OpenAI’s GPT-4 model. Copilot is capable of explaining code, offering suggestions, and rectifying errors. However, competition in the realm of AI coding tools is escalating, with rivals such as Google, Amazon, Tabnine, and Replit offering their solutions.

Survey Demographics

The survey captured responses from 500 developers at enterprise companies in the U.S. The majority of the respondents were men in their 30s and 40s working in organizations with over 1,000 employees.

User Perception and Potential Pitfalls

Many respondents found that the tools enhanced their workflow and allowed for more focus on meaningful work. However, these tools also raise certain concerns, especially around code security. For instance, a Stanford University study found that users with access to an AI assistant wrote less secure code than those without, despite being more confident in their code’s security. As a result, it’s recommended that companies establish standards for the ethical and effective use of these tools.
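To make the security concern concrete, the sketch below shows a hypothetical example (not taken from the survey or the Stanford study) of the kind of SQL-injection-prone code an AI assistant might suggest, alongside the safer parameterized form. The table and function names are invented for illustration.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Vulnerable: user input is interpolated directly into the SQL string,
    # so input like "x' OR '1'='1" rewrites the query's logic.
    query = f"SELECT id, name FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Safer: a parameterized query keeps user input as data, never as SQL.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()

# Demo database with two rows
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)", [("alice",), ("bob",)])

malicious = "x' OR '1'='1"
print(find_user_unsafe(conn, malicious))  # injection: returns every row
print(find_user_safe(conn, malicious))    # input treated as data: returns no rows
```

Both versions look equally plausible in an autocomplete suggestion, which is why reviews and coding standards, rather than confidence alone, are the recommended safeguard.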