June 21st, 2024

Testing Generative AI for Circuit Board Design

A study tested Large Language Models (LLMs) like GPT-4o, Claude 3 Opus, and Gemini 1.5 on circuit board design tasks. Results showed varied performance: Claude 3 Opus excelled at answering specific questions, while other models struggled with complex tasks. Gemini 1.5 showed promise in parsing datasheet information accurately. The study emphasized both the potential and the limitations of using AI models in circuit board design.

Read original article

In a recent study, Large Language Models (LLMs) like GPT-4o, Claude 3 Opus, and Gemini 1.5 were tested for their effectiveness in designing circuit boards. The focus was on their utility in various design tasks such as building skills, writing code, and extracting data from datasheets. The study aimed to push the boundaries of AI assistance for expert human circuit board designers. Results showed that while Claude 3 Opus performed well in answering specific questions, other models struggled with complex tasks like finding suitable parts for a circuit. The models generally lacked a deep understanding of application-specific considerations, leading to suboptimal recommendations. Additionally, the study explored using LLMs to parse information from datasheets, with Gemini 1.5 showing promise in accurately extracting detailed data like pin tables. Overall, the study highlighted both the potential and limitations of using generative AI models for circuit board design tasks.
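
Pin-table extraction, the kind of task where Gemini 1.5 showed promise, is easy to sketch. Below is a minimal illustration using the OpenAI Python client; the prompt, the datasheet snippet, and the expected JSON shape are assumptions for illustration, not the study's actual setup.

```python
# Minimal sketch: ask a frontier model to turn pasted datasheet text into a
# structured pin table. Prompt, snippet, and JSON shape are illustrative only;
# a robust version would constrain the output format and validate it.
import json
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

datasheet_text = """
PIN CONFIGURATION
1  VIN   Input supply voltage
2  GND   Ground
3  EN    Enable (active high)
4  VOUT  Regulated output
"""

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "Extract the pin table from the datasheet text. "
                    "Reply with JSON only: a list of objects with "
                    "'pin', 'name', and 'description' keys."},
        {"role": "user", "content": datasheet_text},
    ],
)

pins = json.loads(response.choices[0].message.content)
for pin in pins:
    print(pin["pin"], pin["name"], "-", pin["description"])
```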

Related

GitHub – Karpathy/LLM101n: LLM101n: Let's Build a Storyteller

The GitHub repository "LLM101n: Let's build a Storyteller" offers a course on creating a Storyteller AI Large Language Model using Python, C, and CUDA. It caters to beginners, covering language modeling, deployment, programming, data types, deep learning, and neural nets. Additional chapters and appendices are available for further exploration.

How to run an LLM on your PC, not in the cloud, in less than 10 minutes

You can easily set up and run large language models (LLMs) on your PC using tools like Ollama, LM Suite, and Llama.cpp. Ollama supports AMD GPUs and AVX2-compatible CPUs, with straightforward installation across different systems. It offers commands for managing models and now supports select AMD Radeon cards.
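
As a taste of that workflow, here is a minimal sketch of querying a locally running Ollama server via its Python client; it assumes `ollama serve` is running and that the model tag (here `llama3`, an arbitrary choice) has already been pulled.

```python
# Minimal sketch: query a local Ollama server via its Python client.
# Assumes `ollama serve` is running and `ollama pull llama3` was done;
# the model tag is an arbitrary example.
import ollama  # pip install ollama

response = ollama.chat(
    model="llama3",
    messages=[{"role": "user",
               "content": "Explain a decoupling capacitor in one sentence."}],
)
print(response["message"]["content"])
```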

Delving into ChatGPT usage in academic writing through excess vocabulary

A study by Dmitry Kobak et al. examines ChatGPT's impact on academic writing, finding increased usage in PubMed abstracts. Concerns arise over accuracy and bias despite advanced text generation capabilities.
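
The "excess vocabulary" measure boils down to comparing a word's post-ChatGPT frequency against its pre-2023 baseline. A toy sketch with invented counts:

```python
# Toy sketch of excess-vocabulary measurement: compare observed word
# frequencies in recent abstracts against a pre-ChatGPT baseline.
# All counts below are invented for illustration.
baseline = {"delve": 12, "intricate": 40, "method": 5000}    # per million words, pre-2023
observed = {"delve": 180, "intricate": 150, "method": 5100}  # per million words, 2024

for word in baseline:
    print(f"{word}: {observed[word] / baseline[word]:.1f}x baseline frequency")
```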

Francois Chollet – LLMs won't lead to AGI – $1M Prize to find solution [video]

The video discusses limitations of large language models in AI, emphasizing genuine understanding and problem-solving skills. A prize incentivizes AI systems showcasing these abilities. Adaptability and knowledge acquisition are highlighted as crucial for true intelligence.

Detecting hallucinations in large language models using semantic entropy

Researchers devised a method to detect hallucinations in large language models like ChatGPT and Gemini by measuring semantic entropy. This approach enhances accuracy by filtering unreliable answers, improving model performance significantly.
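
The method is simple to caricature: sample several answers to the same question, cluster them by meaning, and compute the entropy of the cluster distribution; high entropy flags an unstable, likely confabulated answer. A toy sketch, with a trivial stand-in for the paper's entailment-based clustering:

```python
# Toy sketch of semantic entropy: sample multiple answers to one question,
# group them into meaning-equivalent clusters, and compute the entropy of
# the cluster distribution. High entropy suggests the model has no stable
# answer. `same_meaning` is a trivial stand-in for the paper's
# bidirectional-entailment check.
import math
from collections import Counter

def same_meaning(a: str, b: str) -> bool:
    return a.strip().lower() == b.strip().lower()  # stand-in for an NLI check

def semantic_entropy(answers: list[str]) -> float:
    clusters: list[str] = []   # one representative answer per cluster
    labels: list[int] = []     # cluster index for each sampled answer
    for ans in answers:
        for i, rep in enumerate(clusters):
            if same_meaning(ans, rep):
                labels.append(i)
                break
        else:
            clusters.append(ans)
            labels.append(len(clusters) - 1)
    n = len(answers)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

print(semantic_entropy(["Paris", "paris", "Paris"]))     # 0.0: consistent
print(semantic_entropy(["Paris", "Lyon", "Marseille"]))  # ~1.58: unstable
```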

26 comments
By @bottlepalm - 5 months
It'd be interesting to see how Sonnet 3.5 does at this. I've found Sonnet a step change better than Opus, and for a fraction of the cost. Opus for me is already far better than GPT-4. And same as the poster found, GPT-4o is plain worse at reasoning.

Edit: Sonnet is also better at chain of thought, long-running agentic tasks, and following rigid directions.

By @HanClinto - 5 months
This feels like an excellent demonstration of the limitations of zero-shot LLMs. Zero-shot feels like the wrong way to approach this problem.

I'm no expert in the matter, but for "holistic" things (where there are a lot of cross-connections and inter-dependencies) it feels like a diffusion-based generative structure would be better-suited than next-token-prediction. I've felt this way about poetry-generation, and I feel like it might apply in these sorts of cases as well.

Additionally, this is a highly-specialized field. From the conclusion of the article:

> Overall we have some promising directions. Using LLMs for circuit board design looks a lot like using them for other complex tasks. They work well for pulling concrete data out of human-shaped data sources, they can do slightly more difficult tasks if they can solve that task by writing code, but eventually their capabilities break down in domains too far out of the training distribution.

> We only tested the frontier models in this work, but I predict similar results from the open-source Llama or Mistral models. Some fine tuning on netlist creation would likely make the generation capabilities more useful.

I agree with the authors here.

While it's nice to imagine that AGI would be able to generalize skills to work competently in domain-specific tasks, I think this shows very clearly that we're not there yet, and if one wants to use LLMs in such an area, one would need to fine-tune for it. I'd like to see a round 2 of this using a fine-tuning approach.

By @roody15 - 5 months
It makes me think of the saying “a jack of all trades, a master of none”.

I cannot help but think there are some similarities between large model generative AI and human reasoning abilities.

For example, if I ask a physician with a really high IQ some general questions about, say, fixing the shocks on my minivan, he may have some better ideas than me.

However, he may also be wrong, since he specialized in medicine, even though he provided some good overall info.

Now take a lower-IQ mechanic who has worked as a mechanic for 15 years. Despite having a lower IQ and less overall knowledge of general topics, he gives a much better answer about fixing my shocks.

So with LLM AI, fine-tuning looks to be key, as it is with human beings: large data sets that are filtered and summarized with specific fields as the focus.

By @dindobre - 5 months
Using neural networks to solve combinatorial or discrete problems is a waste of time imo, but I'd be more than happy if somebody could convince me of the opposite.
By @cjk2 - 5 months
Ex EE here

> The AI generated circuit was three times the cost and size of the design created by that expert engineer at TI. It is also missing many of the necessary connections.

Exactly what I expected.

Edit: to clarify, this is even below the expectations of a junior EE who had a heavy weekend on the vodka.

By @guidoism - 5 months
This reminds me of my professor's (probably very poor) description of NP-complete problems, where the computer would provide an answer that may or may not be correct, and you just had to check that it was correct; the test for correctness runs in polynomial time.

It kind of grosses me out that we are entering a world where programming could be just testing (what look to me like) random permutations of programs for correctness.
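
That asymmetry is easy to make concrete: finding a satisfying assignment for a Boolean formula may take exponential search, but checking a proposed one is a single linear pass. A toy checker with a made-up formula:

```python
# Toy illustration of NP-style verification: checking a proposed solution is
# cheap even when finding one is hard. Here we verify a candidate assignment
# against a CNF formula in a single linear pass.
# Formula: (x1 OR NOT x2) AND (x2 OR x3) AND (NOT x1 OR NOT x3)
cnf = [[1, -2], [2, 3], [-1, -3]]          # positive = variable, negative = its negation
candidate = {1: True, 2: True, 3: False}   # the "guessed" certificate

def satisfies(cnf, assignment) -> bool:
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in cnf
    )

print(satisfies(cnf, candidate))  # True: certificate checks out in O(formula size)
```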

By @rkagerer - 5 months
Any discussion of evolved circuits would be incomplete without mentioning Dr. Adrian Thompson's pioneering work in the '90s:

https://www.damninteresting.com/on-the-origin-of-circuits/

By @seveibar - 5 months
I work on generative AI for circuit board design with tscircuit. IMO it's definitely going to be the dominant form of bootstrapping or combining circuit designs in the near future (<5 years).

Most people who think AI won't be able to do this soon are wrong. Just as you can't expect an AI to generate a website in assembly but CAN expect it to generate one with React/Tailwind, you can't expect an AI to generate circuits without strong functional blocks to work with.
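
To make "strong functional blocks" concrete, here's a hypothetical sketch of the idea. This is not tscircuit's actual API; every name and parameter below is invented.

```python
# Hypothetical sketch: composing a design from pre-built functional blocks,
# the way a web page is composed from React components. NOT tscircuit's
# real API; all names and parameters here are invented.
from dataclasses import dataclass

@dataclass
class Block:
    name: str
    kind: str
    params: dict

power = Block("psu1", "buck_converter", {"vout": 3.3})
cpu = Block("u1", "mcu", {"family": "STM32"})

# Wiring blocks at this level is a far easier target for an LLM than
# emitting a raw netlist transistor by transistor.
nets = [(f"{power.name}.VOUT", f"{cpu.name}.VCC"),
        (f"{power.name}.GND", f"{cpu.name}.GND")]
print(nets)
```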

Great work from the author studying existing solutions and models; I'll post some of my findings soon as well! The more you play with it, the more inevitable it feels!

By @al2o3cr - 5 months
TBH the LLM seems worse than useless for a lot of these tasks - entering a netlist from a datasheet is tedious, but CHECKING a netlist that's mostly correct (except for some hallucinated resistors) seems even more tedious.
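
Checking can at least be partly mechanized, though. A minimal sketch of diffing an extracted netlist against a trusted reference, using toy data:

```python
# Minimal sketch: mechanically diff an LLM-extracted netlist against a
# trusted reference so checking isn't purely manual. Toy data only:
# net name -> set of "REF.PIN" connections.
reference = {
    "VCC": {"U1.8", "C1.1", "R1.1"},
    "GND": {"U1.4", "C1.2"},
}
extracted = {
    "VCC": {"U1.8", "C1.1", "R1.1", "R9.1"},  # R9 is a hallucinated resistor
    "GND": {"U1.4"},                           # missing C1.2
}

for net in sorted(reference.keys() | extracted.keys()):
    ref, got = reference.get(net, set()), extracted.get(net, set())
    for pin in sorted(got - ref):
        print(f"{net}: extra connection {pin}")
    for pin in sorted(ref - got):
        print(f"{net}: missing connection {pin}")
```
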
By @kristopolous - 5 months
Just the other day I came up with the idea of doing a flatbed scan of a circuit board and then using machine learning and a bit of text prompting to get to a schematic.

I don't know how feasible it is. This would probably take low millions of dollars of training, data collection, and research to get results that aren't trash.

I'd certainly love it for trying to diagnose circuits.

It's probably not really possible even at higher-end consumer-grade 1200 dpi.

By @amelius - 5 months
Can we have an AI that reads datasheets and produces SPICE circuits? With the goal of building a library of simulation components.
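
The output end of such a pipeline is easy to sketch, assuming the datasheet parameters have already been extracted; the component and values below are invented.

```python
# Toy sketch of the output end of that pipeline: given parameters already
# extracted from a datasheet, emit a SPICE subcircuit. The component and
# its values are invented for illustration.
params = {"name": "RC_FILTER", "r_ohms": 10_000, "c_farads": 100e-9}

def to_spice(p: dict) -> str:
    return "\n".join([
        f".SUBCKT {p['name']} IN OUT GND",
        f"R1 IN OUT {p['r_ohms']}",
        f"C1 OUT GND {p['c_farads']}",
        ".ENDS",
    ])

print(to_spice(params))
```
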
By @shrubble - 5 months
Reminds me of this, an earlier expert-system method for CPU design, which was not used in subsequent designs for some reason: https://en.wikipedia.org/wiki/VAX_9000#SID_Scalar_and_Vector...
By @MOARDONGZPLZ - 5 months
Author mentions prompting techniques to get better results; presumably “you are an expert EE” or “do this and you get a digital cookie” are among these. Can anyone point me to a non-SEO article that outlines the latest and greatest in the prompting techniques domain?
By @cushychicken - 5 months
I'm terrified that JITX will get into the LLM / Generative AI for boards business. (Don't make me homeless, Duncan!)

They are already far ahead of many others with respect to next generation EE CAD.

Judicious application of AI would be a big win for them.

Edit: adding "TL;DRN'T" to my vocabulary XD

By @amelius - 5 months
The whole approach reminds me of:

https://gpt-unicorn.adamkdean.co.uk/

By @ncrmro - 5 months
I had it generate some OpenSCAD but never looked into it further.
By @Terr_ - 5 months
To recycle a rant, there's a whole bunch of hype and investor money riding on a very questionable idea here, namely:

"If we make a really really good specialty text-prediction engine, it could be able to productively mimic an imaginary general AI, and if it can do that then it can productively mimic other specialty AIs, because it's all just intelligence, right?"

By @teleforce - 5 months
Too Lazy To Click (TLTC):

TLDR: We test LLMs to figure out how helpful they are for designing a circuit board. We focus on the utility of frontier models (GPT-4o, Claude 3 Opus, Gemini 1.5) across a set of design tasks, to find where they are and are not useful. They look pretty good for building skills, writing code, and getting useful data out of datasheets.

TLDRN'T: We do not explore any proprietary copilots, or how to apply things like a diffusion model to the place-and-route problem.

By @djaouen - 5 months
Sure, this will end well lol
By @AdamH12113 - 5 months
The conclusions are very optimistic given the results. The LLMs:

* Failed to properly understand and respond to the requirements for component selection, which were already pretty generic.

* Succeeded in parsing the pinout for an IC but produced an incomplete footprint with incorrect dimensions.

* Added extra components to a parsed reference schematic.

* Produced very basic errors in a description of filter topologies and chose the wrong one given the requirements.

* Generated utterly broken schematics for several simple circuits, with missing connections and aggressively-incorrect placement of decoupling capacitors.

Any one of these failures, individually, would break the entire design. The article's conclusion for this section buries the lede slightly:

> The AI generated circuit was three times the cost and size of the design created by that expert engineer at TI. It is also missing many of the necessary connections.

Cost and size are irrelevant if the design doesn't work. LLMs aren't a third as good as a human at this task, they just fail.

The LLMs do much better converting high-level requirements into (very) high-level source code. This makes sense (it's fundamentally a language task), but it also isn't very useful. Turning "I need an inverting amplifier with a gain of 20" into "amp = inverting_amplifier('amp1', gain=-20.0)" is pretty trivial.
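
For scale, the engineering content behind that one-liner is a single textbook relation, gain = -Rf/Rin; the resistor values below are an arbitrary illustration.

```python
# The textbook relation behind the one-liner: an inverting op-amp stage
# has gain = -Rf / Rin, so a target gain only fixes the resistor ratio.
rin = 10_000        # 10 kOhm input resistor, an arbitrary choice
gain = -20.0        # target gain
rf = -gain * rin    # 200 kOhm feedback resistor
print(f"Rin = {rin} Ohm, Rf = {rf:.0f} Ohm, gain = {-rf / rin}")
```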

The fact that LLMs apparently perform better if you literally offer them a cookie is, uh... something.

By @surfingdino - 5 months
Look! You can design thousands of shit appliances at scale! /s