Refactored my Piotroski F-Score module into a fully standalone Python component today. I removed the QuantConnect dependencies and redesigned the file so it can score stocks from either:
1. Local CSV fundamentals
2. Online financial statements (via yfinance)

What's inside:
- A clean PiotroskiFactors dataclass for standardized inputs
- Core PiotroskiScore logic across all 9 F-Score signals
- Input adapters:
  - compute_piotroski_from_csv(...)
  - compute_piotroski_from_online(...)

Why this matters:
- Portability: run it in any Python environment
- Reusability: drop it into screening pipelines, notebooks, or APIs
- Transparency: explicit factor construction and scoring logic
- Extensibility: easy to plug into broader quant workflows

This refactor is part of a broader effort to make my quant stack platform-agnostic, testable, and production-friendly. Next step: add a simple CLI and batch scoring across a universe of tickers.

If you're working on fundamental factor models, I'd love to compare approaches for handling missing/dirty statement data across providers.

#QuantFinance #AlgorithmicTrading #Python #MachineLearning #TradingSystems #DataScience #RiskManagement #TimeSeries #SoftwareEngineering
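For illustration, the dataclass-plus-scoring shape described above might look something like this. The field and function names below are assumptions for the sketch, not the module's actual API:

```python
from dataclasses import dataclass

@dataclass
class PiotroskiFactors:
    # Current- and prior-period fundamentals (illustrative field names)
    net_income: float
    cfo: float                    # cash flow from operations
    roa: float
    roa_prev: float
    leverage: float               # long-term debt / total assets
    leverage_prev: float
    current_ratio: float
    current_ratio_prev: float
    shares_out: float
    shares_out_prev: float
    gross_margin: float
    gross_margin_prev: float
    asset_turnover: float
    asset_turnover_prev: float

def piotroski_score(f: PiotroskiFactors) -> int:
    """Sum the nine binary F-Score signals, one point each."""
    signals = [
        f.net_income > 0,                          # 1. positive net income
        f.cfo > 0,                                 # 2. positive operating cash flow
        f.roa > f.roa_prev,                        # 3. improving ROA
        f.cfo > f.net_income,                      # 4. accruals: CFO exceeds net income
        f.leverage < f.leverage_prev,              # 5. falling leverage
        f.current_ratio > f.current_ratio_prev,    # 6. improving liquidity
        f.shares_out <= f.shares_out_prev,         # 7. no share dilution
        f.gross_margin > f.gross_margin_prev,      # 8. improving gross margin
        f.asset_turnover > f.asset_turnover_prev,  # 9. improving efficiency
    ]
    return sum(signals)

example = PiotroskiFactors(
    net_income=10, cfo=15, roa=0.08, roa_prev=0.05,
    leverage=0.2, leverage_prev=0.3,
    current_ratio=2.0, current_ratio_prev=1.5,
    shares_out=100, shares_out_prev=100,
    gross_margin=0.40, gross_margin_prev=0.35,
    asset_turnover=1.1, asset_turnover_prev=1.0,
)
score = piotroski_score(example)  # 9: every signal passes
```

The CSV and online adapters would then just be thin functions that map a provider's statement fields onto this one dataclass.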
Piotroski F-Score Module Refactored for Python
More Relevant Posts
Built a Python-based Directory Sync Tool to compare and synchronize files between two directories with reliability and control. Instead of relying only on file names or timestamps, the tool uses a combination of metadata and SHA-256 hashing to accurately detect new, modified, and missing files.

Key highlights:
• Recursive directory scanning with structured metadata (name, extension, size, hash)
• Efficient change detection using size-first filtering followed by hash comparison
• Memory-efficient hashing using chunk-based file reading (handles large files)
• Synchronization support with metadata preservation using shutil.copy2
• Safe cleanup by optionally removing extra files from the destination

While building this, I focused on moving beyond a basic script and treating it like a real tool: structuring the code into clear components, improving output readability, and adding validation and error handling to make it more reliable in real use.

GitHub: https://lnkd.in/gt-Ec3rF

#Python #CLI #GitHubProjects #SoftwareDevelopment #LearningByBuilding #SystemsThinking
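The chunked hashing and size-first filtering described above might look roughly like this; the function names are illustrative, not the actual tool's API:

```python
import hashlib
import os
import tempfile

def file_sha256(path, chunk_size=1 << 20):
    """Hash a file in 1 MiB chunks so large files never load fully into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def needs_sync(src_path, dst_path):
    """Size-first filter: only pay for a full hash when sizes match."""
    if not os.path.exists(dst_path):
        return True
    if os.path.getsize(src_path) != os.path.getsize(dst_path):
        return True
    return file_sha256(src_path) != file_sha256(dst_path)

# quick demo with two temporary files
with tempfile.TemporaryDirectory() as d:
    a, b = os.path.join(d, "a"), os.path.join(d, "b")
    for p in (a, b):
        with open(p, "wb") as f:
            f.write(b"same content")
    same = needs_sync(a, b)        # identical files: no sync needed
    with open(b, "wb") as f:
        f.write(b"changed!!!!!")   # same length, different bytes
    changed = needs_sync(a, b)     # sizes match, so the hashes decide
```

The size check catches most differences cheaply; the hash only runs on the equal-size cases where names and timestamps can lie.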
🚀 Stop looping through your DataFrames!

I recently refactored a script processing 10 million rows. We were using a standard row-wise loop, which was choking our CI/CD pipeline and causing memory spikes.

Before optimisation:

```python
for i, row in df.iterrows():
    df.at[i, 'tax_total'] = row['price'] * 1.08 if row['state'] == 'NY' else row['price']
```

After optimisation:

```python
import numpy as np

conditions = [df['state'] == 'NY']
choices = [df['price'] * 1.08]
df['tax_total'] = np.select(conditions, choices, default=df['price'])
```

Performance gain: 45x faster and 90% lower memory usage.

By moving from row-wise iteration to NumPy's vectorized selection, we eliminated the Python-level overhead entirely. The code is not only faster but also cleaner and more readable for the rest of the team.

Vectorization doesn't change the asymptotic complexity -- both versions are O(n) -- but it moves the per-row work from the Python interpreter into compiled C loops, which is where the speedup comes from. It's the single biggest quick win you can apply to most data pipelines.

Have you ever seen a loop-heavy process that you successfully migrated to vectorized operations?

#DataEngineering #Python #Pandas #PerformanceTuning #CodingTips
Day 67/75 | LeetCode 75

Problem: 338. Counting Bits
Difficulty: Easy

Problem Summary: Given an integer n, return an array where each index i (0 ≤ i ≤ n) contains the number of 1's in the binary representation of i.

My Approach: This problem is solved efficiently using dynamic programming with bit manipulation. Instead of converting each number to binary separately, we reuse previously computed results.

• Key observation:
  – Right-shifting a number (i >> 1) removes its last bit
  – (i & 1) tells whether the last bit is 1 or 0
• Transition: ans[i] = ans[i >> 1] + (i & 1). Take the count of 1's from i/2 and add 1 if the current number is odd.
• Base case: ans[0] = 0
• Final answer: array ans of size n + 1

This avoids repeated binary conversions and builds the solution in a bottom-up manner.

Complexity Analysis:
• Time Complexity: O(n)
• Space Complexity: O(n)

Key Takeaway: Problems involving binary representations often have patterns that can be reused. Bit manipulation + DP is a powerful combination for optimizing such computations.

Question Link: https://lnkd.in/gydkB_rv

#Day67of75 #LeetCode75 #DSA #Java #Python #DynamicProgramming #BitManipulation #MachineLearning #DataScience #ML #DataAnalyst #LearningInPublic #TechJourney #LeetCode
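The transition described above is only a few lines in Python:

```python
def count_bits(n):
    """ans[i] = ans[i >> 1] + (i & 1): reuse the bit count of i // 2,
    then add 1 if i's last bit is set."""
    ans = [0] * (n + 1)          # base case: ans[0] = 0
    for i in range(1, n + 1):
        ans[i] = ans[i >> 1] + (i & 1)
    return ans

result = count_bits(5)  # [0, 1, 1, 2, 1, 2]
```

Each entry is computed in O(1) from an earlier one, giving the O(n) total.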
One of the most common FastAPI mistakes is reusing SQLAlchemy models as Pydantic response schemas. This doesn't just couple your layers; it makes it easy to accidentally expose fields like hashed passwords.

In my latest post, I share 8 sections of battle-tested patterns for structuring your FastAPI applications:

1️⃣ Centralized config: using pydantic-settings instead of scattered os.environ calls.
2️⃣ Dependency injection: leveraging Depends() for cleaner auth and DB sessions.
3️⃣ Business logic: moving core logic into pure Python service functions for easier unit testing.
4️⃣ Error handling: using app-level exception handlers to remove try/except noise from routes.

Swipe through the PDF for the code examples and folder structures! ➡️

#FastAPI #Python #BackendDevelopment #WebDevelopment #CleanCode
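The headline mistake can be sketched without any frameworks. Here plain dataclasses stand in for the SQLAlchemy model and the Pydantic response schema; all names are illustrative, not the post's actual code:

```python
from dataclasses import dataclass, asdict

# Stand-in for the ORM model: holds everything the DB knows.
@dataclass
class UserDB:
    id: int
    email: str
    hashed_password: str   # must never leave the API

# Stand-in for the response schema: lists ONLY what clients may see.
@dataclass
class UserOut:
    id: int
    email: str

def to_response(user: UserDB) -> dict:
    """Map the DB model onto the explicit response schema.
    Returning UserDB directly would serialize hashed_password too."""
    return asdict(UserOut(id=user.id, email=user.email))

user = UserDB(id=1, email="ada@example.com", hashed_password="$2b$12$...")
payload = to_response(user)  # {'id': 1, 'email': 'ada@example.com'}
```

The same idea applies with real Pydantic schemas: the response model is a separate, deliberately narrower type than the ORM model.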
A Python function that checks stationarity across multiple financial symbols -- a critical step anyone serious about algorithmic trading should master.

Why does it matter? Many profitable strategies (mean reversion, pairs trading, cointegration) assume stationary data. If your underlying series is non-stationary (trending), your strategy will fail spectacularly.

The approach: I'm using dual statistical tests, ADF and KPSS, because relying on just one can be misleading. Both must agree for a confident verdict.

What it does:
✓ Downloads price data for any symbol
✓ Tests both raw prices and log returns
✓ Returns p-values + an actionable verdict
✓ Works with local CSV files too

This is table stakes for quant work. Whether you're building mean-reversion bots or testing factor strategies, validating stationarity upfront saves months of debugging dead-on-arrival strategies.

GitHub: Chinedum14/Quant-Dev

Happy building! 📈

#QuantTrading #AlgorithmicTrading #DataScience #Python #TimeSeries #SoftwareEngineering #Developer
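The "both tests must agree" rule is easy to get backwards, because ADF and KPSS have opposite null hypotheses: ADF's null is a unit root (non-stationary), KPSS's null is stationarity. A minimal sketch of just the verdict logic; the p-values themselves would come from something like statsmodels' adfuller and kpss, and this combiner is an assumption about the approach, not the repo's actual code:

```python
def stationarity_verdict(adf_pvalue: float, kpss_pvalue: float,
                         alpha: float = 0.05) -> str:
    """Combine ADF (H0: unit root) and KPSS (H0: stationary).
    Only when both tests point the same way do we trust the verdict."""
    adf_says_stationary = adf_pvalue < alpha        # reject the unit root
    kpss_says_stationary = kpss_pvalue >= alpha     # fail to reject stationarity
    if adf_says_stationary and kpss_says_stationary:
        return "stationary"
    if not adf_says_stationary and not kpss_says_stationary:
        return "non-stationary"
    return "inconclusive"   # the tests disagree, e.g. trend-stationary cases

verdict = stationarity_verdict(0.003, 0.41)  # "stationary"
```

Note the asymmetry in the comparisons: a small ADF p-value argues *for* stationarity, while a small KPSS p-value argues *against* it.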
Marimo, an open-source Python notebook for data science, had an RCE flaw exploited within 10 hours of disclosure. No PoC existed; attackers built their exploit directly from the advisory description. The vulnerability was an unauthenticated WebSocket endpoint that gave full shell access.

Data science tools running in production are becoming primary targets, and the time-to-exploit window keeps shrinking. Patch immediately if you're using Marimo.
You write for loops every day. Do you know what actually runs underneath them?

Day 03 of 30 -- Generators and Iterators Deep Dive
Advanced Python + Real Projects Series

Python calls iter() to get the iterator, then next() repeatedly until StopIteration is raised. That is every for loop you have ever written. And yield pauses the function, hands the value out, and resumes from the exact same line next time.

Today's topic covers:
- The lazy vs. eager evaluation problem -- why loading 10 GB into a list crashes servers
- The full iterator protocol -- what powers every for loop
- 3 types -- generator function, generator expression, async generator
- Annotated syntax -- basic, yield from, and the send() two-way pattern
- A real fintech pipeline -- 52 GB log file, 4.2 MB of memory used
- 5 production mistakes, including exhausting a generator twice
- Generator pipeline architecture -- identical to Unix pipes

Key insight: Don't store what you can stream.

#Python #PythonProgramming #DataEngineering #BackendDevelopment #LearnPython #100DaysOfCode #PythonDeveloper #SoftwareEngineering #TechContent #BuildInPublic #TechIndia #CleanCode #CodingTips #CodeNewbie #LinkedInCreator #PythonTutorial
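The Unix-pipe analogy fits in a few lines: each stage is a generator that pulls one record at a time from the previous stage, so memory stays flat no matter how large the input is. A toy sketch (the log format here is made up for illustration):

```python
def read_lines(lines):
    """Stage 1: lazily yield raw records (could be a 52 GB file object)."""
    for line in lines:
        yield line.strip()

def parse(records):
    """Stage 2: keep only valid 'key=value' records, parsed into pairs."""
    for r in records:
        if "=" in r:
            key, value = r.split("=", 1)
            yield key, float(value)

def total(pairs):
    """Stage 3: reduce. Only one record is ever in memory at a time."""
    return sum(value for _, value in pairs)

data = ["a=1.5", "not-a-record", "b=2.5"]   # stands in for a huge log file
result = total(parse(read_lines(data)))     # 4.0
```

Nothing runs until total() starts pulling: each next() ripples back up the chain, exactly like data flowing through `cat | grep | awk`.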
Day 30 of #100DaysOfPython

Upgraded the Password Manager App today. I moved from saving data in a text file to using JSON, which makes the data more structured and easier to manage. I also added a search feature to retrieve saved credentials.

What I improved:
- Switched to JSON for structured data storage
- Implemented search functionality for saved passwords
- Handled errors using try/except (missing file, missing data, empty JSON)
- Improved the overall user experience in the GUI

This version feels much closer to a real application. Managing data properly and handling edge cases made a big difference. Also a good reminder that writing code is one thing, but making it robust and user-friendly is another level.

#100DaysOfCode #100DaysOfPython #Python #Tkinter #PasswordManager #JSON #ErrorHandling #PythonProjects #LearningToCode #CodingJourney #BuildInPublic
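A sketch of what the JSON search with try/except error handling might look like; the function and file names are illustrative, not the app's actual code:

```python
import json
import os
import tempfile

def find_credentials(path, site):
    """Look up saved credentials, surviving a missing file,
    empty/corrupted JSON, and a site that was never saved."""
    try:
        with open(path) as f:
            data = json.load(f)
    except FileNotFoundError:
        return None            # no save file yet
    except json.JSONDecodeError:
        return None            # empty or corrupted JSON
    return data.get(site)      # None if the site isn't stored

# quick demo with a throwaway data file
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "passwords.json")
    with open(path, "w") as f:
        json.dump({"github": {"email": "me@example.com", "password": "s3cret"}}, f)
    hit = find_credentials(path, "github")
    miss = find_credentials(path, "gitlab")                     # unknown site
    gone = find_credentials(os.path.join(d, "none.json"), "x")  # missing file
```

Each failure mode returns None instead of crashing, so the GUI can show a friendly "not found" message in every case.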
The DB connection leaked in production. The file handle stayed open. The row lock hung indefinitely.

These aren't edge cases; they are the inevitable result of making resource cleanup the caller's responsibility. In Python, context managers move that responsibility from the developer to the type itself. The resource becomes self-healing.

🔹 __exit__ is called even if an exception is raised; that is the safety guarantee.
🔹 @contextmanager lets you write the same protocol with yield, no class needed.
🔹 Any resource with an acquire/release lifecycle belongs in a context manager.

The with statement isn't just syntactic sugar; it's a contract. The caller writes business logic; the object handles the cleanup.

#Python #SoftwareEngineering #BackendDevelopment #SoftwareArchitecture #CleanCode
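A minimal demonstration of the safety guarantee: the release step runs even when the body raises. Here a list stands in for a real resource such as a DB connection:

```python
from contextlib import contextmanager

@contextmanager
def managed_resource(log):
    """Acquire/release lifecycle written with yield instead of a class."""
    log.append("acquire")
    try:
        yield "handle"          # the with-block body runs here
    finally:
        log.append("release")   # the __exit__ guarantee: runs on any exit path

log = []
try:
    with managed_resource(log) as handle:
        raise RuntimeError("boom")   # simulate a failure mid-operation
except RuntimeError:
    pass

# Despite the exception, cleanup still ran: log == ["acquire", "release"]
```

The try/finally around the yield is what turns the decorated generator into the __enter__/__exit__ protocol; without it, an exception in the body would skip the release step.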