August 27th, 2024

Launch HN: Bucket Robotics (YC S24) – Defect detection for molded and cast parts

Bucket Robotics develops custom defect detection models for injection molding from CAD designs, aiming to improve efficiency and reduce human error in quality control in the roughly $300 billion injection molding market.

Bucket Robotics, founded by Matt and Steph, turns CAD designs into custom defect detection models for manufacturing, with an initial focus on injection molding. Injection-molded parts make up a large share of modern vehicles, and defect rates for minor blemishes can reach 15%. Traditional defect detection relies on machine learning models trained on real-world samples or on manual inspection, both of which are time-consuming, and human inspectors grow less effective as fatigue sets in.

Bucket Robotics instead builds detection models from the CAD design rather than from real-world samples, so a model can be ready before the mold is even completed. Their approach generates numerous variations of the 3D model to simulate defects, then renders photorealistic images of those variants to train a vision model. The resulting model runs on standard hardware, so customer data stays on-site.

The injection molding market is valued at approximately $300 billion, and as vehicle electrification grows, demand for efficient defect detection is likely to grow with it. The founders, who have backgrounds in robotics and automation, are eager to engage with the community and are looking for connections in industrial computer vision and quality control.
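To make the pipeline concrete, here is a minimal sketch of the defect-variation step, assuming the open-source trimesh library and an STL export of the part; the defect type, magnitudes, and file names are illustrative, not Bucket's actual pipeline:

    # Sketch: perturb a CAD mesh to mimic a sink mark, then export labeled
    # variants for later rendering. Assumes `pip install trimesh numpy`.
    import numpy as np
    import trimesh

    def add_sink_mark(mesh, depth=0.2, radius=3.0, rng=None):
        """Depress vertices near a random surface point to mimic a sink mark."""
        rng = rng or np.random.default_rng()
        defective = mesh.copy()
        verts = np.array(defective.vertices)
        normals = np.array(defective.vertex_normals)
        center = verts[rng.integers(len(verts))]
        dist = np.linalg.norm(verts - center, axis=1)
        mask = dist < radius
        # Push affected vertices inward along their normals, tapering with distance.
        falloff = depth * (1.0 - dist[mask] / radius)
        verts[mask] -= normals[mask] * falloff[:, None]
        defective.vertices = verts
        return defective

    base = trimesh.load("part.stl")  # assumed CAD export of the part
    rng = np.random.default_rng(seed=0)
    for i in range(100):
        add_sink_mark(base, rng=rng).export(f"sink_mark_{i:04d}.stl")

In a full pipeline, each exported variant would then be rendered photorealistically under varied lighting and camera poses, with the perturbed region recorded as the training label.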

- Bucket Robotics specializes in defect detection for injection molded parts using CAD models.

- Traditional defect detection methods are often inefficient and prone to human error.

- The company’s models can be ready before the physical molds are completed, saving time.

- The injection molding market is worth roughly $300 billion, and defect rates for minor blemishes can reach 15%.

- Founders have extensive experience in robotics and are looking to connect with industry professionals.

Related

MIT robotics pioneer Rodney Brooks thinks people are vastly overestimating AI

MIT robotics pioneer Rodney Brooks cautions against overhyping generative AI, emphasizing its limitations compared to human abilities. He advocates for practical integration in tasks like warehouse operations and eldercare, stressing the need for purpose-built technology.

Charge Robotics (YC S21) is hiring full-stack devs to deploy solar robots

Charge Robotics, a Y Combinator-backed startup, seeks a Senior Software Engineer for their automated solar farm factory. Ideal for full-stack developers passionate about climate impact, offering growth opportunities and equity compensation.

Charge Robotics (YC S21) is hiring MechEs to build robots that build solar farms

Charge Robotics, a startup in Oakland, CA, is hiring a Senior Mechanical Engineer to design and test robotic systems for solar farm construction, requiring over six years of experience and CAD proficiency.

Fully-automatic robot dentist performs first human procedure

A fully-automatic robot dentist has successfully performed a dental procedure, enhancing precision and efficiency in dental care. Developed by Perceptive, it uses advanced imaging technology and aims for broader applications.

9 comments
By @acyou - 8 months
Nice! I have so many questions. How stable is the injection molding process once it's fully proven out and up and running? Does it follow a bathtub curve, or do defects keep randomly popping up?

What do you use on your end to label the ejector pin locations, parting lines, etc? Does this process use Hexagon software inputs to make that easier?

If you're not relying so much on a skilled operator, would you be using a CMM for dimensional inspection anyway, and would this then be better solved with a CMM? How can you get quality parts if you don't have a skilled operator to set up the machine correctly and correct the defects? Are you ever going to be able to replace a good machine operator, or does this just help reduce the inspection toil and burden? Do they usually need 100% inspection, or just periodic inspection with binning?

Why do you want to target injection molded parts and not machined parts?

Don't most of these machines just have the parts fall into a bin, with no robot arm? Doesn't this mean that instead of paying a good injection mold tech, you're now paying for an injection mold tech and a robotics tech, if you have to program the arm path for every part setup?

How many defects are "dimensional" and how many are "cosmetic"?

Can a defect detection model accept injection mold pressure curves as input? Isn't that a better data source for flash and underfilling?

Is this supposed to get retrofit, or go on new machines?

By @a1rb4Ck - 8 months
Very cool, good luck! I used to work on this. Your synthetic dataset pipeline is really neat; a foundation model of molding defects might be feasible.

I hope you will also work on the whole inline quality control problem. From what I saw of the field, sometimes you only get the final quality days later, after painting, finishing, or cool-down of big parts. And the quality metric is notably undefined for visual defects, so using the CAD render as a reference is a good solution. Because plastic is so cheap and the process so stable, I have seen days of production shredded over a tiny, perfectly repeated visual defect.

Injection molding machines are heavily instrumented [0], and I tried mixing in-mold sensors + process parameters + photos + thermography of hot parts [1] (sorry, it's in French; I might find better documentation later).

[0] https://scholar.google.com/citations?view_op=view_citation&h...

[1] https://a1rb4ck.github.io/phd/#[128,%22XYZ%22,85.039,614.438...
By @jjk166 - 8 months
I'm an engineer at a company that injection molds parts for medical and industrial devices. This seems extremely promising.

Can your scene generator handle things like custom tooling? For example if I were to place a part to be inspected on a clear acrylic jig, could the model be trained to look through the acrylic?

We're currently already using a vision system to measure certain features on the parts, can your models be applied to generic images, or does it require integration with the camera?

How does the customer communicate the types and probable locations of potential defects? Or do you perform some sort of mold simulation to predict them? Likewise how does the customer communicate where defects are critical versus non-critical?

Finally how does pricing work? Does it scale based on part size, or does the customer select how many variations or do you do some analysis ahead of time and generate a custom quote? Is it a one time cost or is it an ongoing subscription? Could you ballpark a price range for generating a model for a part roughly 3.5 inches in diameter and 1.5 inches tall with moderate complexity?

Feel free to reach out to the email in my profile if you'd like to discuss a little more in depth.

By @JofArnold - 8 months
Mechanical engineer turned software engineer here; I love this kind of stuff and frequently wonder how I might apply my software expertise to that domain again. Among other things, I worked in automotive, and the components I worked on were forged and heat-treated high-strength steels. The defects in forged components are often very small (tens of microns), but I'd be curious whether this could work there. We used powerful microscopes, including electron microscopes, on the production lines, so maybe that would work?
By @chfritz - 8 months
Nice use case! Can you elaborate a bit more on the robotics piece? What role does the robot play? I assume it's required to turn the part around for inspection. If so, how do you (automatically?) compute the grasping points? Also feel free to find me on LinkedIn if you want to chat more about growing a robotics business and/or geometric reasoning for manufacturing.
By @doctorpangloss - 8 months
This looks cool.

> Steph and I have a history of working...

I have so many questions, since you are experienced.

Do you think there should be import tariffs on Chinese made EVs?

I know your gut is telling you not to answer this question, but that is, like, the biggest and most important story in auto manufacturing, no? It would be like saying that if cars were extremely cheap, so that everyone could have one, the manufacturing story for free cars must already be sufficient, and so there isn't much demand for innovation to make things cheaper. But in real life, the thing that makes cars cheap or expensive is a law, which could disappear with the stroke of a pen, so it's interesting to get your POV.

> On the backend we’re generating...

OpenAI, Anthropic, Stability, etc. have already built 3D-model-to-synthetic-data pipelines - why won't they build this one?

By @knicholes - 8 months
I apologize for such a naive comment, as I don't have experience in this field, but I've seen OpenAI do some pretty impressive image recognition tasks (multimodal LLMs). Have you tried uploading some images of successful injection moldings and some of unsuccessful ones (they don't even have to be from the same mold), telling it "these are examples of success" and "these are examples of failures, e.g. flashing, blemish, scratch, etc.", and then feeding it picture(s) of the molded part?

It'd be interesting to hear how effective that is.
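For what it's worth, a minimal sketch of the few-shot approach described above, using OpenAI's Python client; the model choice, file names, and prompt wording are assumptions, not a tested setup:

    # Sketch: few-shot defect classification with a multimodal LLM.
    # Assumes `pip install openai` and OPENAI_API_KEY in the environment.
    import base64
    from openai import OpenAI

    def image_part(path):
        """Encode a local JPEG as a data-URL content part for the chat API."""
        with open(path, "rb") as f:
            b64 = base64.b64encode(f.read()).decode()
        return {"type": "image_url",
                "image_url": {"url": f"data:image/jpeg;base64,{b64}"}}

    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "These are examples of good parts:"},
                image_part("good_1.jpg"), image_part("good_2.jpg"),
                {"type": "text",
                 "text": "These are examples of failures (flash, blemish, scratch):"},
                image_part("flash_1.jpg"), image_part("scratch_1.jpg"),
                {"type": "text",
                 "text": "Is this part good or defective? Name any defect you see."},
                image_part("candidate.jpg"),
            ],
        }],
    )
    print(response.choices[0].message.content)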

By @edg5000 - 8 months
How would this compare against producing a 3D mesh using traditional photogrammetry and comparing the CAD model and mesh for deviations? Or would this be unrealistic since the photogrammetrically produced mesh would lack the level of detail required?
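As a rough illustration of that comparison, a sketch using trimesh's proximity queries, assuming the scanned mesh has already been registered to the CAD frame; file names and the tolerance are illustrative:

    # Sketch: measure deviation of a photogrammetry mesh from the CAD reference.
    # Assumes both meshes are in the same units and already aligned (e.g. by ICP).
    import numpy as np
    import trimesh

    cad = trimesh.load("part_cad.stl")    # reference geometry
    scan = trimesh.load("part_scan.ply")  # mesh from photogrammetry

    # Signed distance from each scan vertex to the CAD surface (positive = inside).
    query = trimesh.proximity.ProximityQuery(cad)
    deviation = query.signed_distance(np.array(scan.vertices))

    tolerance = 0.1  # mm, illustrative
    print(f"max deviation: {np.abs(deviation).max():.3f} mm")
    print(f"vertices out of tolerance: {int((np.abs(deviation) > tolerance).sum())}")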
By @anomaly23 - 8 months
What is the difference between this and traditional FEA done on CAD models? Is this purely a cost benefit?