July 9th, 2024

Synthetic User Research Is a Terrible Idea

Synthetic user research is criticized by Matthew Smith for the bias inherent in AI outputs. Relying on AI may produce generalized results and fabricated specifics, undermining valuable software development insights. Smith advocates controlled, unbiased research methods.


Synthetic user research is criticized in a blog post by Matthew Smith of Atomic Object. The article highlights bias in AI outputs and the limitations of using synthetic users or generative AI for research. Smith argues that relying on AI for research yields generalized results and potentially fabricated specifics, which may not provide valuable insights for software development. He emphasizes controlling research methods to minimize bias and ensure the quality of the data gathered, and warns against investing time and money in AI research that offers only generalized solutions, noting that the last 20% of effort is crucial to a project's success. Smith questions the reliability of AI-generated needs and urges companies to weigh the risks of basing software projects on such outputs. The post raises concerns about both the effectiveness and the ethical implications of using AI for user research, advocating a more thoughtful and controlled approach to gathering insights for software development.

1 comment
By @jschveibinz - 3 months
Synthetic user research raises some interesting philosophical questions. Despite these questions, however, I believe that AI can indeed be a very useful tool for modeling how humans will respond, with predictable variances. AI-generated synthetic user data is one way to gauge how useful the actual user research is. AI and LLM tools will drive many advances in market research.