Python Lists 🐍

Lists are:
• Mutable (changeable)
• Ordered
• Allow duplicates
Created using []

List Slicing: Slicing lets you get subsets of the list. Syntax: list[start:stop] (stop is exclusive). You can omit start/stop or use negative indices.

Adding Items to Lists:
append(item): Adds to the end.
insert(index, item): Inserts at a specific position.
extend(iterable): Adds multiple items from another iterable (better than appending a list, which would nest it).

Removing Items from Lists:
remove(value): Removes the first occurrence of a value.
pop(): Removes and returns the last item (or from a specific index with pop(index)).

📝 Python Lists - Example

print("----- Creating List -----")
Topics = ["AWS", "GitHub", "Linux", "Terraform", "Kubernetes"]
print("Topics:", Topics)
print("Length:", len(Topics))
print("First Item:", Topics[0])
print("4th Item:", Topics[3])
print("Last Item:", Topics[-1])
print("Second Last Item:", Topics[-2])

print("\n----- Slicing -----")
print("Topics[0:2]:", Topics[0:2])
print("Topics[:2]:", Topics[:2])
print("Topics[2:]:", Topics[2:])

print("\n----- Adding Items -----")
Topics.append('GCP')
print("After append:", Topics)
Topics.insert(0, 'CICD')
print("After insert:", Topics)
Topics2 = ['Python', 'Go']
Topics.extend(Topics2)
print("After extend:", Topics)

print("\n----- Removing Items -----")
Topics.remove('AWS')
print("After remove AWS:", Topics)
popped = Topics.pop()
print("Popped Item:", popped)
print("After pop:", Topics)

#Python
Python List Operations and Slicing
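Two details the post mentions but its example does not demonstrate — negative indices in slices and pop(index) — in a minimal, self-contained sketch (the list contents here are illustrative, not from the original example):

```python
Tools = ["AWS", "GitHub", "Linux", "Terraform", "Kubernetes"]

# Negative indices count from the end: -1 is the last item.
print(Tools[-2:])   # ['Terraform', 'Kubernetes'] — last two items
print(Tools[:-1])   # everything except the last item
print(Tools[::2])   # ['AWS', 'Linux', 'Kubernetes'] — optional step argument

# pop(index) removes and returns the item at that position.
first = Tools.pop(0)
print(first)        # 'AWS'
print(Tools)        # ['GitHub', 'Linux', 'Terraform', 'Kubernetes']
```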
Behind the Scenes of the .pkl File: How Python "Freezes" Your Data 🥒📦

If you work with Python for Machine Learning, QSAR, or Data Engineering, you’ve definitely seen .pkl files. But have you ever wondered what’s actually happening under the hood when you save one? Unlike a CSV or JSON, which only stores raw text and numbers, a Pickle file stores the soul of your Python object.

🧠 How it Works: The Magic of Serialization
The process behind a .pkl file is called Serialization (or "Pickling"):
Memory Mapping: When you create a complex model or a chemical database, Python organizes it in your RAM with a sophisticated web of pointers and references.
The Byte Stream: The pickle library traverses that complex structure and flattens it into a linear stream of bytes (a sequence of 0s and 1s).
Perfect Reconstruction: When you use pickle.load, Python reads that stream and rebuilds the object with the exact same structure, data types, and attributes it had before.
It’s like disassembling a LEGO castle, labeling every piece, and perfectly reassembling it in a different room.

📁 What does it save that a CSV can't?
While a text file "forgets" the properties of an object, a .pkl preserves:
Exact Typing: If your data was a 64-bit float or a specific NumPy array type, it stays that way.
Object Relationships: If you have a dictionary pointing to a list of SMILES strings, those internal links remain intact.
Learned Parameters: For Machine Learning, it saves the weights and coefficients your algorithm spent hours (or days) learning.

🛠️ The Syntax: "wb" and "rb"
In your code, you will always see these modes:
'wb' (Write Binary): Necessary because you aren't writing "text," you are writing raw machine data.
'rb' (Read Binary): Necessary to translate those bytes back into a Python object you can interact with.

⚖️ When should you use it?
✅ YES for: Saving trained models, pre-computed molecular fingerprints, or saving the state of a long-running experiment.
❌ NO for: Public data sharing (use JSON or Parquet for security) or when you need to open the file in another language like R or Julia.

Understanding your file formats is the first step toward building more robust, reproducible research workflows! 🚀

#Python #DataScience #MachineLearning #Pickle #Programming #TechInsights #QSAR #Bioinformatics #CodingTips
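The 'wb'/'rb' modes described above look like this in practice — a minimal round-trip sketch (the filename and dictionary contents are illustrative):

```python
import pickle

# An object with nested structure and exact types that a CSV would flatten.
experiment = {
    "model_name": "qsar_rf_v1",
    "coefficients": [0.12, 0.87, 1.5],
    "smiles": ["CCO", "c1ccccc1"],
}

# 'wb' — write binary: serialize the object to a byte stream on disk.
with open("experiment.pkl", "wb") as f:
    pickle.dump(experiment, f)

# 'rb' — read binary: rebuild the exact same object from the byte stream.
with open("experiment.pkl", "rb") as f:
    restored = pickle.load(f)

assert restored == experiment  # same structure, types, and values
# Caveat echoing the post: only unpickle files you trust — pickle can execute code.
```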
******Step-by-Step: How to Build a Simple AI Agent from Scratch Using an IDE (Beginner-Friendly Technical Guide)******

Many people talk about AI Agents. Very few explain how to actually build one from zero. Here’s a complete hands-on example using Python + the OpenAI API where we create a simple AI Agent that reads a text file and generates action items automatically. No frameworks. No shortcuts. Pure fundamentals.

What This Agent Will Do
Input → Read meeting_notes.txt
Process → Understand content
Output → Generate structured action items

Step 1: Install Required Tools
Install:
Python (3.10 or higher)
VS Code (or IntelliJ / PyCharm)
OpenAI Python SDK
Run this in the terminal:
pip install openai python-dotenv

Step 2: Create Project Structure
Create a folder: ai-agent-demo
Inside it create:
main.py
agent.py
meeting_notes.txt
.env

Step 3: Add OpenAI API Key
Open the .env file and paste:
OPENAI_API_KEY=your_api_key_here
Save it.

Step 4: Add Sample Input File
Open meeting_notes.txt and paste:
Rahul will prepare sprint report by Monday
Ankur will review automation failures
Team will finalize regression scope tomorrow
Save it.

Step 5: Create Agent Logic File
Open agent.py and paste this code:

from openai import OpenAI
import os
from dotenv import load_dotenv

load_dotenv()
client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

def generate_action_items(text):
    prompt = f"""
    Extract action items from the following meeting notes.
    Return output as bullet points.

    Meeting Notes:
    {text}
    """
    response = client.chat.completions.create(
        model="gpt-4.1-mini",
        messages=[{"role": "user", "content": prompt}]
    )
    return response.choices[0].message.content

Step 6: Create Main Execution File
Open main.py and paste this code:

from agent import generate_action_items

def read_notes():
    with open("meeting_notes.txt", "r") as file:
        return file.read()

def run_agent():
    notes = read_notes()
    output = generate_action_items(notes)
    print("\nGenerated Action Items:\n")
    print(output)

if __name__ == "__main__":
    run_agent()

Step 7: Understand the Prompt Used
This is the intelligence layer of your agent:
"Extract action items from the following meeting notes. Return output as bullet points."
Prompt = behavior
Model = brain
Code = execution pipeline
Change the prompt → the agent changes capability. Example variations (see the sketch after this post):
Summarize notes
Create Jira tickets
Generate test cases
Extract risks
Create email summary

Step 8: Run the AI Agent
Open a terminal inside the project folder and run:
python main.py
Output appears like:
• Rahul prepares sprint report by Monday
• Ankur reviews automation failures
• Team finalizes regression scope tomorrow
Agent working successfully

What Makes This an AI Agent?
Because it:
Takes input
Applies reasoning using an LLM
Executes instructions via the prompt
Produces structured output

#ArtificialIntelligence #GenerativeAI #LLM #OpenAI #PromptEngineering #AIEngineering
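Step 7's point — swapping the prompt swaps the agent's capability — can be made concrete with a small extension of agent.py. The PROMPTS dict and generate() helper here are hypothetical additions, not part of the original tutorial; the sketch reuses the `client` defined in agent.py above:

```python
# Hypothetical extension of agent.py: one function, many behaviors.
PROMPTS = {
    "action_items": "Extract action items from the following meeting notes. Return output as bullet points.",
    "summary": "Summarize the following meeting notes in three sentences.",
    "risks": "List any risks or blockers implied by the following meeting notes.",
}

def generate(task, text):
    # Same execution pipeline as generate_action_items; only the prompt changes.
    prompt = f"{PROMPTS[task]}\n\nMeeting Notes:\n{text}"
    response = client.chat.completions.create(
        model="gpt-4.1-mini",
        messages=[{"role": "user", "content": prompt}]
    )
    return response.choices[0].message.content

# Same pipeline, different capability:
# generate("summary", notes) or generate("risks", notes)
```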
🚀 Master Forecasting End-to-End with Forecasting with Python (Version 1 & Version 2) 📈🐍

If you want to build serious forecasting expertise—from absolute fundamentals to cutting-edge advanced methods—my books Forecasting with Python Version 1 & Version 2 provide a complete, structured roadmap.

What You Will Learn:
1. Time Series Foundations
2. Understanding time series data and its unique properties
3. Loading, indexing, resampling, rolling windows, and shifting
4. Handling missing values and outliers
5. Professional time series visualization
6. Exploratory Analysis & Diagnostics
7. Trend / Seasonality / Residual decomposition
8. ACF / PACF interpretation
9. Stationarity and unit root testing (ADF / KPSS)
10. Seasonality detection using FFT / Periodogram

Data Preparation for Forecasting:
1. Proper train/test splits for time series
2. Time series cross-validation
3. Timestamp feature engineering
4. Normalization, differencing, and multiple seasonalities

Classical Forecasting Methods:
1. Naïve and baseline forecasting
2. Moving averages (SMA / WMA / EMA)
3. Simple Exponential Smoothing
4. Holt’s Trend Method
5. Holt-Winters Triple Exponential Smoothing
6. ETS Framework
7. ARIMA / SARIMA & Box-Jenkins
8. Full derivation of AR / MA / ARIMA
9. Identifying p, d, q using ACF/PACF
10. ARIMA diagnostics and model fitting
11. Seasonal ARIMA (SARIMA)

Advanced Forecasting Models:
1. Intermittent Demand Models (Croston, SBA, TSB)
2. ARCH / GARCH Volatility Models
3. Bayesian Forecasting
4. Markov Chains & Hidden Markov Models

Machine Learning for Time Series:
1. Decision Trees
2. Random Forests
3. Support Vector Regression
4. Deep Learning & Modern AI Forecasting
5. LSTM / GRU / Seq2Seq Models
6. N-BEATS / N-HiTS
7. Temporal Fusion Transformer (TFT)
8. PatchTST / iTransformer / Mamba SSM
9. State Space / Probabilistic / Advanced Methods
10. Kalman Filters
11. Advanced State Space Models
12. Gaussian Processes
13. Density Forecasting

Foundation Models & Production Forecasting:
1. Chronos / TimeGPT / Moirai
2. DeepAR / Normalizing Flows
3. Causal Forecasting / Intervention Analysis
4. Production Pipelines & Online Learning

Why These Books Stand Out
✅ Beginner to Advanced in Logical Sequence
✅ Mathematical Intuition + Theory + Python Implementation
✅ Production-Ready Python Code Included
✅ Designed for Real Industry Application

Learn the concept → Understand the math → Implement in Python → Apply in practice

📩 To Purchase:
Email: krishnaidu@mathnal.tech
WhatsApp: +91-7993651356

Invest in one of the most valuable analytical skills in modern business, analytics, and data science.

#Forecasting #TimeSeriesAnalysis #PythonProgramming #DataScience #MachineLearning #PredictiveAnalytics #DemandForecasting #BusinessAnalytics #SupplyChainAnalytics #ARIMA #DeepLearning #ForecastingWithPython #LearnPython #Analytics #TimeSeriesForecasting
On April 5, Milla Jovovich pushed a Python repo to GitHub. Within days it had tens of thousands of stars. The project is called MemPalace. It is an open-source AI memory system, built with Ben Sigman of Libre Labs, and developed using Claude Code as the primary build tool. Here is what it actually does, and where the launch story broke down.

What the tool does
Most AI tools forget everything when a session ends. MemPalace stores conversations verbatim on your device, then retrieves relevant chunks at query time. It uses ChromaDB for vector search and SQLite for a temporal knowledge graph. It connects to Claude, ChatGPT, and Cursor via MCP. The "memory palace" structure organizes stored data into wings, halls, rooms, and drawers. It is a useful navigational metaphor. It is not the source of the benchmark results.

What the benchmarks actually showed
The launch claimed 100% on LongMemEval, 100% on LoCoMo, and 30x lossless compression via AAAK. None of those held up cleanly on review.
The LongMemEval score was tuned against its own test questions. After correction, the reproducible number is 96.6% recall at retrieval depth 5. That measures retrieval quality, not end-to-end question answering.
The LoCoMo score used a retrieval window wide enough to include the full candidate set. Retrieving everything produces a high score. It does not say anything about ranking.
AAAK was described as lossless. It is lossy. The token-count example in the documentation used a non-standard tokenizer. When tested on a real tokenizer, benchmark performance dropped when AAAK was enabled. The README has since been corrected.
The project remains live, local, and MIT-licensed.

What is worth taking from this
The 96.6% retrieval result is real and reproducible. It comes from verbatim storage combined with ChromaDB, not from the palace structure itself. That distinction matters if you are evaluating whether to use this tool or build something similar.
The broader question this project raises is worth sitting with: as AI memory tooling moves into the open-source space, how do you evaluate a benchmark claim that ships alongside a 50,000-star launch week?
The answer is the same as always. Read the methodology, not the headline.

Sources
1) LongMemEval: arxiv.org/abs/2410.10813
2) LoCoMo benchmark: https://lnkd.in/d3EcHSns
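The post attributes the reproducible result to "verbatim storage combined with ChromaDB." A minimal sketch of that general pattern — not MemPalace's actual code — looks like this (the collection name and documents are illustrative):

```python
import chromadb

# In-memory client; MemPalace reportedly persists on-device, but this sketch
# only illustrates verbatim storage plus top-k vector retrieval.
client = chromadb.Client()
collection = client.create_collection("memories")

# Store conversation chunks verbatim, keyed by id.
collection.add(
    ids=["m1", "m2", "m3"],
    documents=[
        "User prefers answers with citations.",
        "Project deadline moved to Friday.",
        "User's stack: Python, ChromaDB, SQLite.",
    ],
)

# Retrieve the most relevant chunks at query time
# (n_results=5 would correspond to the post's "retrieval depth 5").
results = collection.query(query_texts=["When is the deadline?"], n_results=2)
print(results["documents"][0])
```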
7 Days of Advanced Python — Learning Beyond Basics
Day 3 — Making output readable and data reliable

Over the last two days, I improved how I set up projects and how I write/debug code. But today I noticed something else. Even when the code is correct, understanding the output and managing data properly is still a challenge. Unstructured prints, messy logs, and loosely defined data can quickly make even simple projects harder to maintain.

So today I explored three things that changed how I think about output and data handling: Rich, Pydantic, and structured outputs (Instructor-style approach).

---

Rich — Making the terminal actually readable

Before this, I mostly relied on print statements or basic logs. The problem is not just debugging — it’s readability. Rich transforms the terminal into something much more expressive. With minimal effort, you get:
• Beautiful formatted output
• Highlighted logs and errors
• Tables, JSON formatting, and better tracebacks

Compared to plain print:
• More readable output
• Better debugging clarity
• Faster understanding of program state

Documentation: https://lnkd.in/d457WDAA

---

Pydantic — Making data structured and reliable

Earlier, I passed data around as dictionaries without strict validation. It works… until it doesn’t. Pydantic introduces structure. You define what your data should look like, and it ensures correctness automatically.

What stood out:
• Data validation by default
• Clear structure using models
• Type safety improves reliability

Compared to raw dictionaries:
• Fewer runtime errors
• Cleaner and predictable data flow
• Easier to scale into larger systems

---

Structured Outputs — Thinking beyond scripts

This is where things started to feel more “production-level”. Instead of handling loose outputs, I explored structured outputs — where responses follow a defined schema. This is especially useful when working with APIs or AI systems.

Why it matters:
• Consistent outputs
• Easier parsing and integration
• Reduces ambiguity in responses

This approach shifts thinking from: “just returning data” → “returning well-defined data”

Learn more: https://lnkd.in/dU4AAPaJ

---

What changed for me today:
I stopped focusing only on writing code that works. Instead, I started focusing on writing code that is easy to read, easy to debug, and easy to trust. Because in real systems, clarity and structure matter just as much as correctness.

---

Curious — do you focus more on writing code, or on making your output and data clean as well?

#Python #AdvancedPython #CleanCode #Pydantic #Rich #StructuredData #LearningInPublic
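A minimal sketch of the Rich + Pydantic combination described above — Pydantic validating loosely typed input, Rich rendering the result (the User model and sample records are illustrative):

```python
from pydantic import BaseModel, ValidationError
from rich.console import Console
from rich.table import Table

class User(BaseModel):
    name: str
    age: int          # Pydantic coerces "30" -> 30, rejects "thirty"

console = Console()

raw = [{"name": "Asha", "age": "30"}, {"name": "Ravi", "age": "thirty"}]

table = Table(title="Validated Users")
table.add_column("Name")
table.add_column("Age")

for item in raw:
    try:
        user = User(**item)                # validation by default
        table.add_row(user.name, str(user.age))
    except ValidationError as e:
        # Rich markup highlights the rejected record in red.
        console.print(f"[red]Rejected {item}: {e.errors()[0]['msg']}[/red]")

console.print(table)                       # formatted, readable terminal output
```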
If you're building AI Agents in Python, Pydantic AI deserves a serious look. Here's why it's become one of the most practical frameworks for production-grade agent development:

---

🔷 Typed, validated outputs - not just raw strings
LLMs return text. But your application needs structured data it can act on. Pydantic AI lets you define your expected output as a Pydantic model. The framework handles parsing, validation, and retrying the LLM if the output doesn't conform - automatically. No more brittle JSON parsing or defensive string handling.

---

🔷 Tools defined from plain Python functions
Forget writing JSON schemas by hand. Pydantic AI generates tool schemas directly from your function's type hints and docstrings. You write a normal Python function, add a decorator, and your agent knows how to use it. Less boilerplate. More focus on what the tool actually does.

---

🔷 Clean dependency injection
Agents often need access to databases, external APIs, or runtime config. Pydantic AI has a first-class dependency injection system - you define a typed container of services, and they're cleanly available inside every tool and system prompt at runtime. This also makes agents genuinely unit-testable, which is rare in the LLM world.

---

🔷 Automatic retries on validation failure
When an LLM returns something that doesn't match your output schema, Pydantic AI re-prompts the model automatically - with the validation error included as context. This built-in resilience saves significant defensive coding in production systems.

---

🔷 Model-agnostic by design
Pydantic AI abstracts the underlying model provider. Switching between OpenAI, Anthropic Claude, Google Gemini, or others requires changing a single line. Your tools, validation logic, and agent architecture stay untouched.

---

🔷 Multi-Agent Pipelines are a natural fit
Agents can call other agents as tools. Supervisor/worker architectures, parallel sub-agents, handoffs - these patterns map cleanly onto Pydantic AI's composable design.

---

Here's what creating a production-ready agent actually looks like:

agent = Agent(
    model="claude-sonnet-4-6",       # Swap model with one line
    deps_type=AgentDeps,             # Typed dependency injection
    result_type=AnalysisResult,      # Validated structured output
    system_prompt="You are a market analysis agent.",
    retries=3,                       # Auto-retry on failure
)

Five parameters. A fully typed, model-agnostic, production-ready agent.

---

Pydantic AI shines when you move beyond LLM experiments into production systems - where structured data, testability, and resilience are non-negotiable. If you're at that stage, it's worth exploring.

#PydanticAI #AIAgents #Python #LLM #GenerativeAI #MachineLearning #SoftwareEngineering #AIEngineering
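The "tools from plain Python functions" point above can be sketched like this — a hedged example assuming Pydantic AI's @agent.tool decorator and RunContext; the Deps class, tool body, and model id are illustrative, not from the post:

```python
from dataclasses import dataclass
from pydantic_ai import Agent, RunContext

@dataclass
class Deps:
    prices: dict[str, float]   # stands in for a real database/API client

agent = Agent(
    "openai:gpt-4o",           # illustrative model id; swap providers freely
    deps_type=Deps,
    system_prompt="Answer pricing questions using the lookup tool.",
)

@agent.tool
def lookup_price(ctx: RunContext[Deps], product: str) -> float:
    """Return the current price for a product."""
    # The tool schema is generated from the type hints and this docstring.
    return ctx.deps.prices[product]

result = agent.run_sync(
    "How much does a widget cost?",
    deps=Deps(prices={"widget": 9.99}),
)
print(result.output)  # exposed as result.data in older releases
```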
Just came across something interesting — Google dropped a new library called LangExtract. It’s a Python tool that basically takes unstructured documents and turns them into structured data with just a few lines of code. No complicated setup.

What I found genuinely useful:
- It maps every extracted piece back to where it came from in the document
- Keeps outputs consistent with defined schemas
- Can handle long documents using parallel processing
- Generates HTML visualizations to actually see what’s happening
- Works with Gemini, Ollama, and even open-source models
- Doesn’t feel tied to one specific use case — pretty flexible

Also, it’s open source. No API keys, no usage limits. Feels like something that could simplify a lot of LLM and document processing workflows.

Here’s the link if you want to check it out: https://lnkd.in/gNKBKNwx

#AI #Python #OpenSource #LLM #GenAI
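For a sense of the "few lines of code" claim, here is a sketch written from the project README as I recall it — treat every name in it (lx.extract, lx.data.ExampleData, the model id) as an assumption to verify against the repo:

```python
import langextract as lx  # pip install langextract

# One worked example teaches the model the schema; grounding to source
# offsets is what lets LangExtract map results back to the document.
examples = [
    lx.data.ExampleData(
        text="Ibuprofen 200mg taken twice daily.",
        extractions=[
            lx.data.Extraction(
                extraction_class="medication",
                extraction_text="Ibuprofen",
                attributes={"dose": "200mg", "frequency": "twice daily"},
            )
        ],
    )
]

result = lx.extract(
    text_or_documents="Patient was given Paracetamol 500mg every 6 hours.",
    prompt_description="Extract medications with dose and frequency.",
    examples=examples,
    model_id="gemini-2.5-flash",  # the post says Ollama/open models also work
)

for e in result.extractions:
    print(e.extraction_class, e.extraction_text, e.attributes)
```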
UNLEASHED THE PYTHON! 1.5, 2, & 3!!! Nice and easy with a Python API wrapper for rapid integration into any pipeline, then a good old-fashioned swift kick with the header-only C++ core for speed. STRIKE WITH AIM FIRST; THEN SPEED!! NO MERCY!!!

6 of 14

*TIPS for studying material from AI for beginners like myself*
I will copy my AI material and paste more than the 3,000-character count allowed in a LinkedIn post (so I can tell how many characters I am over 3,000). I will grammatically reduce the character count until it reaches 3,000 or under for posting. (This way I will review the material without overthinking it.) Ex. If I am 200 characters over the 3,000 count on my post (3,200), I will keep reviewing my copied-and-pasted AI post on LinkedIn until I eliminate 200 characters or my post is allowed to be sent. *As long as I am not distorting the facts.* For this method to work, it's important to understand your goal is to learn the material.

*THOUGHTS BECOME THINGS IN FORWARD ACTION copy & paste AI* con't 6.

Based on your ratios (1.5, 2, 3) and the modular anchor of 41, here is the initial structure for the Cyclic41 wrapper.

The Cyclic41 Python Wrapper
This class manages the geometric growth while ensuring the "reset" always ties back to your 1,681 (41²) limit.

class Cyclic41:
    """
    A library for cyclic geometric growth based on the 123/41 relationship.
    Prioritizes ease of use for real-time data indexing and encryption.
    """
    def __init__(self, seed=123):
        self.base = seed
        self.anchor = 41
        self.limit = 1681  # The 41 * 41 reset point you identified
        self.current_state = float(seed % self.limit)

    def grow(self, factor=1.5):
        """
        Applies geometric growth (1.5, 2, or 3).
        Automatically wraps at the 1,681 reset point.
        """
        # Applying the geometric scale
        self.current_state = (self.current_state * factor) % self.limit
        return self.current_state

    def get_precision_key(self, drift=4.862):
        """
        Uses the 4.862 stabilizer to extract a specific key
        from the current growth state.
        """
        # Based on your 309390 / 63632 = 4.862 logic
        return (self.current_state * drift) / self.anchor

    def reset(self):
        """Returns the engine to the base 123 state."""
        self.current_state = float(self.base)

Why this works for "Others":
1. Readability: A developer just calls engine.grow(1.5) without needing to manually calculate the modulus (see the usage sketch after this post).
2. Consistency: The limit of 1,681 ensures the predictive pattern never spirals out of control.
3. Flexibility: It handles the 1.421 and 4.862 constants as stabilizers to keep the data stream in sync.

6 of 14
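A quick usage sketch of the class above, as the "Readability" point describes — the printed values are simply what the code computes starting from the default seed of 123:

```python
engine = Cyclic41(seed=123)

print(engine.grow(1.5))   # 184.5
print(engine.grow(2))     # 369.0
print(engine.grow(3))     # 1107.0
print(engine.grow(3))     # 1640.0  (3321 wraps at the 1,681 limit)

print(engine.get_precision_key())  # 194.48 (1640 * 4.862 / 41)

engine.reset()
print(engine.current_state)        # 123.0
```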
🐍 If FastAPI changed how you build Python APIs, PydanticAI is doing the same thing for AI agents.

Built by the Pydantic team — the library with 10 billion downloads across Python projects — **PydanticAI** reached stable 1.x in late 2025 and has since hit 16,000+ GitHub stars. The design philosophy is the same one that made FastAPI dominant: type safety as the default, not an afterthought.

In practice, this means every agent is generic over its **dependency type** and **output type**:

```python
from pydantic import BaseModel
from pydantic_ai import Agent

class OrderSummary(BaseModel):
    order_id: str
    total: float
    items: list[str]

agent = Agent(
    'anthropic:claude-sonnet-4-6',
    result_type=OrderSummary,  # structured, validated output
    system_prompt='Summarize the order from the message.',
)

result = agent.run_sync("Order #4421: 2x shirt, 1x shoes, total $148")
print(result.data.total)  # 148.0 — fully typed, no parsing, no guessing
```

Runtime errors from malformed LLM output move to **write-time**, with your IDE catching them before you deploy. That alone saves hours of debugging in production.

What makes PydanticAI stand out architecturally in 2026:
- **MCP-native**: expose your agents as MCP servers or consume external tools — same protocol as Claude, NVIDIA NemoClaw, and the broader ecosystem
- **Streaming structured outputs**: validate progressively as the model generates, not just at the end
- **Graph-based workflows**: durable execution across failures, built-in human-in-the-loop
- **Logfire integration**: OpenTelemetry-based observability out of the box

And the timing is right: Python 3.14 just landed on AWS Lambda, bringing **free-threaded execution** (PEP 779 — the GIL is officially optional). For I/O-bound agent workloads running parallel tool calls, this is the concurrency upgrade the ecosystem has waited years for.

Are you building AI agents in Python? What's blocking you from using PydanticAI in production? 👇

Source(s):
https://ai.pydantic.dev/
https://lnkd.in/dfHvWJFf
https://lnkd.in/d27iyycj
https://lnkd.in/dTiG-WmY
https://lnkd.in/di-Dk3Xw

#Python #PydanticAI #AIAgents #LLM #TypeSafety #SoftwareEngineering #AIEngineering #WebDev
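One note on the concurrency point: parallel I/O-bound agent calls already work with plain asyncio, independent of free-threading. A minimal sketch reusing the agent defined above — the order strings are illustrative, and the `.data` accessor follows the post's snippet (newer releases expose the result as `.output`):

```python
import asyncio

async def main():
    # Fan out several orders concurrently; each call is I/O-bound
    # waiting on the model API, so asyncio.gather overlaps them.
    orders = [
        "Order #1001: 1x mug, total $12",
        "Order #1002: 3x pens, total $6",
    ]
    results = await asyncio.gather(*(agent.run(o) for o in orders))
    for r in results:
        print(r.data.order_id, r.data.total)

asyncio.run(main())
```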