LLMs trained on garbage internet info

"They are just trained on too much garbage information on the internet!" I was exploring a relatively vague concept in programming, specifically the JavaScript/TypeScript ecosystem around type-safety, and static and dynamic types, and was asking some frontier LLMs about it. I wanted to know what kind of information they're trained on. Here are the results in the screenshots I've provided below. As you can see, their answer is good but not complete, because type-safety at interpretation time is not equal to total type-safety in the program. Type-safety is simply the alignment between the type of something at runtime and interpretation or compile time. What matters is run-time, because if you annotate a type of string in your program, yet somehow pass a number or an object into it, then your program will have a big bug. Although with the extra help of TypeScript, we can make sure that a lot of these mistakes won't happen but there are some use cases, such as Forms, API responses, and 3rd party libs, that if you don't strictly check run-time types and validate the data pouring into your code, you will end up with bugs. My point here is not to give lecture about TypeScripy but to set an alarm for you if you're excessively relying on LLMs to learn, this may not be a good resource because there are just too much garbage information on the internet that LLMs will throw at you confidently because they're just token machines, not actually human-level of intelligence to determine whether what they're saying is correct or not. #typescript #javascript #frontend #llm #learn #ai #llms #intelligence #token #generation #code #program #programming #type_safety #lecture #api #form #data #information


I think this hits an important point that often gets lost. LLMs are decent at describing concepts, but they tend to blur critical distinctions. In TypeScript, “type safety” at compile time is not the same as correctness at runtime. The gap is exactly where bugs happen: external inputs, APIs, forms, third-party code. If you treat LLM output as authoritative instead of as a starting point, you risk internalizing incomplete models. Tools don’t replace reasoning. They just accelerate it when you already know what to question. Strong reminder to validate assumptions, not just types. Strong reminder to learn from other humans, not just LLMs.

Pretty sure if you asked a human that same question, they wouldn't go into as much depth as the nuance you provided, Max. Prompting matters both for LLMs and humans. Had the answer been incorrect instead of incomplete, that would have been an issue.

Using validation libraries like Zod, which do assert input types at runtime, I'd say TypeScript provides solid type safety at both compile time and runtime.
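For instance, a minimal sketch of that pattern with Zod (the schema and sample payload are illustrative): the same schema serves as the runtime validator and the source of the static type.

```typescript
import { z } from "zod";

// The schema doubles as a runtime validator and a compile-time type source.
const UserSchema = z.object({
  id: z.number(),
  name: z.string(),
});

type User = z.infer<typeof UserSchema>; // static type derived from the schema

// safeParse checks the actual runtime shape instead of trusting a cast.
const result = UserSchema.safeParse(JSON.parse('{"id":"42","name":"Ada"}'));
if (!result.success) {
  console.error(result.error.issues); // reports that id is a string, not a number
} else {
  const user: User = result.data; // safe to use: shape verified at runtime
}
```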

Great points! LLMs are amazing starting points, but they can’t replace deep, intentional learning from docs, real-world debugging, and hands-on experience. Always pair their suggestions with your own reasoning and validation.
