He created Python in 1991. The language that powers 70% of AI today. TensorFlow. PyTorch. NumPy. All Python.

And here's what he thinks about AI: "I'm definitely not looking forward to an AI-driven future."

This is Guido van Rossum. Creator of Python. Still writing code at 69. He uses AI every single day. But his role has shifted: "Instead of writing code, I've moved to the position of a code reviewer."

His concern isn't robots taking over. It's something more real: "Too many people without ethics getting the ability to do much more." And on AI-generated code: "Code still needs to be read and reviewed by humans. Otherwise we risk losing control entirely."

Three legends. Three weeks. One conclusion:
Uncle Bob: AI increases demand for programmers.
DHH: AI amplifies the strong, exposes the weak.
Guido: AI without human oversight is dangerous.

🔥 Bonus - Uncle Bob posted this yesterday: https://lnkd.in/ddMDt-4x

27 years ago Kent Beck said "Refactor Mercilessly." Now with Claude, "merciless" takes on a new meaning. He's ripping systems apart and rebuilding them at will. Massive TDD + Gherkin acceptance tests keep everything stable. The tests are so thorough that Claude can't break free.

Same Uncle Bob. New tools. Same discipline. The fundamentals have never mattered more.

Save this if you're following this series. Drop a comment: are you still reviewing every line AI writes, or do you trust it blindly?

#Python #AI #Programming #GuidoVanRossum #UncleBob #SoftwareDevelopment
Guido van Rossum on AI, Code Review & Ethics
🚀 PYTHON

No other language has been adopted this broadly, this fast. Not because Python is the fastest language. Not because it wins the cleanest-syntax debates. But because it meets people where they are, and the ecosystem around it is unmatched.

Think about what a single AI project touches today:
→ Data preprocessing with NumPy, Pandas, Polars
→ ML frameworks like Scikit-learn, XGBoost, LightGBM
→ Deep learning with PyTorch, TensorFlow, JAX, Keras
→ Experiment tracking through MLflow, Weights & Biases, Comet ML
→ Visualization using Matplotlib, Seaborn, Plotly, Altair
→ Model serving via FastAPI, BentoML, Gradio, Streamlit
→ MLOps and orchestration with Airflow, Prefect, Kubeflow, Dagster
→ Feature engineering using Featuretools, tsfresh, Category Encoders
→ Model validation through Evidently AI, Deepchecks, Great Expectations
→ Data security with Presidio, PySyft, OpenMined

That's 40+ battle-tested libraries across 10 categories, all in one language. Python didn't win because of hype. It won because practitioners chose it, day after day, project after project.

If you're building in AI today, Python isn't optional. It's infrastructure.

What Python tool has had the biggest impact on your workflow? Drop it below.
Ever find your Python script chugging, or even crashing, when dealing with massive AI/ML datasets? 😩 Traditional list comprehensions are great, but they load *everything* into memory at once. For gigabytes of data or features, that's a recipe for disaster!

Enter Python's generator expressions. ✨ They're like list comprehensions' super-efficient sibling. Instead of building a full list in memory, they yield items one by one, only when requested. This "lazy" evaluation is a game-changer for memory-intensive tasks in machine learning and deep learning, like processing large embedding files, log datasets, or synthetic data streams.

Imagine you're processing millions of data points to extract features. A list comprehension would try to hold all processed features in memory. A generator expression? It processes one, yields it, and then moves to the next, keeping your RAM happy and your training loops smooth. It's a simple syntax change with massive performance implications!

How do you handle memory when working with huge datasets in your AI/ML projects? Share your tricks below! 👇

#Python #AIML #MachineLearning #DataScience #PythonTips #MemoryEfficiency
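The difference the post describes fits in a few lines (illustrative names; any large iterable behaves the same way):

```python
import sys

data_points = range(1_000_000)  # stand-in for a huge dataset

# List comprehension: materializes every result in memory at once.
features_list = [x * 2 for x in range(100_000)]

# Generator expression: same logic, but items are produced one at a time.
features_gen = (x * 2 for x in data_points)

# The generator object stays tiny no matter how much data it will yield.
print(sys.getsizeof(features_gen) < sys.getsizeof(features_list))  # True

# Nothing is computed until you iterate, so you can aggregate
# without ever holding a full list in memory.
total = sum(x * 2 for x in range(1000))
print(total)  # 999000
```

Swapping `[...]` for `(...)` is the entire syntax change; everything downstream that iterates (for loops, `sum`, `min`, DataLoader-style pipelines) works unchanged.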
I shipped an open-source Python package this week, addressing a long-standing frustration with AI agents.

Many major frameworks today, including LangChain, smolagents, and CrewAI, default to ReAct. While this approach works for simple tasks, it struggles with multi-step reasoning: it commits to a single path and cannot backtrack.

To tackle this, I implemented LATS (Language Agent Tree Search) for smolagents. This approach uses a tree search instead of a single reasoning chain:
- Generates multiple candidate actions at each step
- Scores and selects branches using UCT (the selection rule behind AlphaGo's tree search)
- Writes a self-critique on every failed path, allowing sibling branches to learn from mistakes

The outcome is an agent that explores its options before making a commitment. A key lesson from this project: the reflection mechanism is crucial. Failed branches are not discarded; they improve the overall search.

You can install it with pip: pip install smolagents-lats. A full write-up is available on Medium (link in comments). I welcome feedback from anyone working on agent reliability.

#AI #MachineLearning #OpenSource #Python #LLMAgents #HuggingFace
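For context, the UCT rule mentioned above trades off a branch's average score (exploitation) against how rarely it has been tried (exploration). A minimal sketch of the formula, not the actual smolagents-lats internals:

```python
import math

def uct_score(total_value, visits, parent_visits, c=1.41):
    """Upper Confidence bound for Trees: mean value + exploration bonus."""
    if visits == 0:
        return float("inf")  # always try unexplored branches first
    exploit = total_value / visits
    explore = c * math.sqrt(math.log(parent_visits) / visits)
    return exploit + explore

# Branch 'a' has been tried 4 times with decent results; 'b' is untried.
branches = {"a": (3.0, 4), "b": (0.0, 0)}  # name -> (total_value, visits)
best = max(branches, key=lambda k: uct_score(*branches[k], parent_visits=4))
print(best)  # 'b': unvisited branches score infinity, so they get explored
```

As visits accumulate, the exploration bonus shrinks and selection converges on the branches with the best average scores.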
Andrej Karpathy's 630-line Python script ran 50 experiments overnight without any human input.

Andrej Karpathy's AutoResearch ran 50 AI experiments overnight on one GPU. The design pattern behind it applies far beyond ML training. Here's how it works.

On the night of March 7, Andrej Karpathy pushed a 630-line Python script to GitHub and went to sleep. By morning, his agent had run 50 experiments, discovered a better learning rate, and committed the proof to git without a single human instruction in between.

The story making the rounds is about autonomous machine learning (ML) research. But the more important story is about the design pattern underneath it, and the 40-line Markdown file that made the whole thing work.

The patterns in AutoResearch mirror methodology that any laboratory scientist would recognize: a fixed experimental protocol, a single variable under test, an objective measurement criterion, a keep-or-discard decision at the end of each run, and a lab notebook that bridges the scientist's intent and the instrument's execution.

This article extracts the three primitives that make the loop generalizable and shows why the shift from code to structured prose as the human-agent interface is a development worth paying attention to. https://lnkd.in/eDDyCVNU

Please follow Divye Dwivedi for such content.

#DevSecOps #SecureDevOps #CyberSecurity #SecurityAutomation #CloudSecurity #InfrastructureSecurity #DevOpsSecurity #ContinuousSecurity #SecurityByDesign #SecurityAsCode #ApplicationSecurity #ComplianceAutomation #CloudSecurityPosture #SecuringTheCloud #AI4Security #IntelligentSecurity #AppSecurityTesting #CloudSecuritySolutions #ResilientAI #AdaptiveSecurity #SecurityFirst #AIDrivenSecurity #FullStackSecurity #ModernAppSecurity #SecurityInTheCloud #EmbeddedSecurity #SmartCyberDefense #ProactiveSecurity
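The loop the post describes (fixed protocol, one variable under test, objective measurement, keep-or-discard) can be sketched in a few lines. Everything here is hypothetical; AutoResearch's actual script is far more involved:

```python
import random

def run_experiment(lr):
    """Stand-in for one training run; returns a loss (lower is better)."""
    return (lr - 0.003) ** 2 + random.gauss(0, 1e-7)  # optimum near lr=0.003

random.seed(0)
best_lr = 0.01                       # fixed starting protocol
best_loss = run_experiment(best_lr)
for _ in range(50):                  # 50 unattended runs
    candidate = best_lr * random.uniform(0.5, 2.0)  # single variable under test
    loss = run_experiment(candidate)
    if loss < best_loss:             # objective keep-or-discard criterion
        best_lr, best_loss = candidate, loss        # commit the improvement
print(f"best lr after 50 runs: {best_lr:.5f}")
```

The real system adds the lab-notebook piece: a Markdown protocol file the agent reads before each run, and a git commit after each kept result.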
Day 5/20. This challenge really forces you to be a writer. Well, at least I have something to write about: AI and Machine Learning being based in Python. A lot has been done for day 5, including basic Python skills, then moving on to how AI and ML work, from the very first variable to control loops and decision making.

I like to think of it in my own terms: a model is like a baby being trained, and the baby's behavior depends on the kind of education or information its guardian gives it. Then there's the simple "garbage in, garbage out" saying we always hear about in ML. It all comes down to your data quality, and that quality depends on how you, as the model developer, clean and analyse your data. The basics of it all.

As simple as it is, with this #Africaagility course and this challenge I'm sure many people are tempted to use AI to write their posts, but what would be the use of that? I think originality, writing in one's own terms, and getting corrected as you write make it all worth it. With that said, I can summarize some of my knowledge this way.

#machinelearning #ArtificialIntelligence
Machine Learning Text Data using Snips NLU

#machinelearning #datascience #textdata #snipsnlu

Snips NLU is a Natural Language Understanding Python library that lets you parse sentences written in natural language and extract structured information. It powers the NLU engine used in the Snips Console, which you can use to create awesome and private-by-design voice assistants. https://lnkd.in/gJ7YGiYv
As a personal project, I'm porting NumPy from Python and C to Ruby and Rust. Partly as an experiment and learning process: I'm learning Rust and spending some time diving deep into ML and scientific computing libraries. And partly to give options beyond Python. I'm opinionated when it comes to Python, and I have a personal preference for Ruby in that class of languages. Rust is easy enough to compile into a Ruby gem, especially with the "magnus" crate https://lnkd.in/dpx3-mjN

Now the question is: how much would you trust AI to write code for a library that needs to be extremely high performance to really meet the needs of scientific computing, especially ML algorithms?

Right now I'm using it to suggest project layouts, compilation methods for building the extension portion that lets the Ruby wrapper communicate with the Rust-written extension, and any other help I need along the way. Setting this up correctly has been a challenge, and AI has helped quite a bit. However, since Ruby, Rust, Magnus, and rake with rake-compiler have moved ahead of the model I'm using (GPT-5 from OpenAI), the suggestions I get back don't necessarily work out of the box. I still have to re-prompt, study source code and examples, and read plenty of documentation to ensure my own understanding.

LLMs are out of date the moment they are released, and no amount of underlying web searches with analysis is going to be a replacement for true realtime analysis by the underlying model itself. You might get a decent summary of the internet search performed, but the context just won't be there, especially since most content will be out of date and there's little way for the AI to know that.

This has also further solidified my opinion that juniors and college kids should not be using AI to generate code of any kind. If you can't do it from scratch, you shouldn't be doing it with AI.

Have you tried to do any direct ports with AI? What was your experience and lessons learned? Did you find any inefficiencies or mistakes that required a solid rewrite with plenty of human peer review?
Update on this. After digging into the NumPy source, I found code with a "written on" date of 1978, some of the most legacy code I've ever seen. The last update was claimed in 1983, changing array instantiation from array(1) to array(*) for variable-length payloads in structs that aren't always known until runtime. Probably one of the most legacy codebases I've ever encountered. There is constant interweaving of C and Python, and then Cython bindings.

Honestly, major kudos to the maintainers for keeping this going, modernizing where possible, and keeping something this hard to modify so stable. It's making my port all the more difficult ;) Seriously though, you guys have done a great job over the years working on this beast. I think it's time for a major cleanup effort, and I mean that with reverence to an amazing tool.
From Confused to Confident: Revised 45 Python String Methods in One Day (excluding regular expressions, f-strings, and the modulo operator).

Today, I went deep into Python's built-in string methods, and honestly, it changed how I look at text processing.

- From basics like:
◽ capitalize(), lower(), upper()
- To powerful tools like:
◽ replace(), split(), join()
◽ maketrans() & translate()
◽ format() & format_map()
- And even advanced checks like:
◽ isdigit(), isnumeric(), isidentifier()

One key takeaway: small string methods = huge real-world impact (data cleaning, NLP, automation, etc.)

I didn't just read them: I tested edge cases, explored Unicode behavior, and understood why they work.

#Python #CodingJourney #DataScience #Programming #LearnInPublic #PythonDeveloper #AI #MachineLearning #Developers #CodeNewbie #DataCleaning #DataAnalytics
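A few of the methods from that list, with the kind of edge cases worth testing yourself:

```python
# maketrans() + translate(): character-level substitution in a single pass.
table = str.maketrans({"a": "4", "e": "3"})
print("clean data".translate(table))  # cl34n d4t4

# format_map(): like format(), but takes a mapping directly (no ** unpacking).
row = {"name": "Ada", "score": 95}
print("{name}: {score}".format_map(row))  # Ada: 95

# isdigit() vs isnumeric(): not interchangeable once Unicode is involved.
print("42".isdigit(), "42".isnumeric())  # True True
print("½".isdigit(), "½".isnumeric())    # False True (numeric, not a digit)

# isidentifier(): handy when auto-generating column or variable names.
print("col_1".isidentifier(), "1_col".isidentifier())  # True False
```

The Unicode checks are exactly where "I tested edge cases" pays off: code that assumes isdigit() and isnumeric() agree will break on real-world text.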
📘 What I Learned Today: Pythonic Thinking

Today's focus was not just writing Python code, but writing it the right way.

🔹 Key concepts:
→ Iterators & generators (memory-efficient data handling)
→ zip, enumerate, map, filter, reduce (clean transformations)
→ Shallow vs deep copy (avoiding hidden bugs)
→ Mutability vs immutability (understanding data behavior)
→ *args & **kwargs (flexible function design)

🔹 In simple terms: Pythonic thinking is about writing cleaner, smarter, and more efficient code instead of longer code.

🔹 Why it matters in AI: AI workflows involve large datasets and complex transformations; efficient and bug-free code makes a huge difference.

🔹 My takeaway: Good Python code is not just about "working". It's about being readable, efficient, and scalable.

#AI #Python #LearningInPublic #CleanCode #TechJourney #BuildInPublic
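Three of the concepts above in runnable form: the shallow-vs-deep copy trap, zip + enumerate, and *args/**kwargs in a wrapper (log_call is a made-up helper for illustration):

```python
import copy
from functools import reduce

# Shallow vs deep copy: a shallow copy shares nested objects with the original.
grid = [[1, 2], [3, 4]]
shallow = copy.copy(grid)
deep = copy.deepcopy(grid)
grid[0][0] = 99
print(shallow[0][0])  # 99: the shallow copy saw the nested mutation
print(deep[0][0])     # 1: the deep copy is fully independent

# zip + enumerate for clean transformations, no manual index bookkeeping.
names, scores = ["a", "b"], [0.9, 0.7]
ranked = [(i, n, s) for i, (n, s) in enumerate(zip(names, scores), start=1)]
print(ranked)  # [(1, 'a', 0.9), (2, 'b', 0.7)]

# *args / **kwargs: forward any call signature through a wrapper.
def log_call(fn, *args, **kwargs):
    result = fn(*args, **kwargs)
    print(f"{fn.__name__} returned {result}")
    return result

total = log_call(reduce, lambda a, b: a + b, [1, 2, 3, 4])
print(total)  # 10
```

The shallow-copy example is the classic "hidden bug" with nested data like feature matrices: mutating the original silently changes the copy.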