Peglib 0.2.0 + 0.2.1 -- Major code generation reliability release

Peglib is a PEG parser library for Java, inspired by cpp-peglib. It lets you define parsers using PEG grammar syntax, with support for both runtime interpretation and standalone source code generation.

These two releases focused on one goal: making the generated standalone parser produce identical results to the interpreted parser. Every time.

What changed:
-- Rewrote the CST/AST code generator from the ground up to structurally mirror the interpreter's control flow. Identified and fixed 7 behavioral divergences in whitespace handling, predicate evaluation, cut failure propagation, and token boundary tracking.
-- Fixed generated parsers crashing with StackOverflowError when the whitespace directive references named rules like LineComment or BlockComment. Added a reentrant guard matching the interpreter's approach.
-- Fixed generated Token nodes losing their rule names. Tokens from < > captures now carry the parent rule name (e.g., "SelectKW", "NumericType") instead of a generic "token".
-- Fixed CST tree structure. Container expressions (ZeroOrMore, OneOrMore, Optional) now wrap their children in proper NonTerminal nodes instead of flattening them into the parent -- matching how the interpreter builds the tree.
-- Added 40 conformance tests that run the same grammars and inputs through both the interpreted and generated parsers, asserting identical success/failure outcomes.

These fixes were discovered while building a PostgreSQL SQL parser with peglib. The interpreted parser handled the full grammar correctly, but the generated standalone parser had subtle failures. Now both produce matching results on all 350 tests.

Test count: 308 -> 350
pragmatica-lite dependency: 0.9.10 -> 0.24.0

Available on Maven Central:
<groupId>org.pragmatica-lite</groupId>
<artifactId>peglib</artifactId>
<version>0.2.1</version>

GitHub: https://lnkd.in/dgdjZahV

#java #parsing #peg #opensource #compilers
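The conformance-testing idea behind those 40 tests can be sketched like this. Note that this is a hypothetical illustration: the Parser interface and the two stand-in parsers below are invented for the sketch and are not peglib's actual API; the real suite runs full grammars through the interpreted and generated parsers.

```java
import java.util.List;

public class ConformanceCheck {
    // Hypothetical stand-in for a parser; peglib's real API differs.
    interface Parser { boolean parses(String input); }

    // Two implementations of the same "grammar" (non-empty digit strings):
    // in the real suite these would be the interpreted and generated parsers.
    static final Parser interpreted =
        s -> !s.isEmpty() && s.chars().allMatch(Character::isDigit);
    static final Parser generated = s -> s.matches("[0-9]+");

    // The core assertion: both parsers must agree on every input,
    // for successes and failures alike.
    public static boolean conforms(Parser a, Parser b, List<String> inputs) {
        return inputs.stream().allMatch(in -> a.parses(in) == b.parses(in));
    }

    public static void main(String[] args) {
        List<String> inputs = List.of("123", "", "12a", "007");
        System.out.println(conforms(interpreted, generated, inputs)); // true
    }
}
```

The value of this style of test is that it checks outcomes, not implementations: any behavioral divergence between the two parsers on any shared input is a failure, regardless of where the bug lives.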
Sergiy Yevtushenko’s Post
🚀 Mastering Prefix Sum & Suffix Sum in Java (DSA)

Understanding Prefix Sum and Suffix Sum is a game-changer in Data Structures & Algorithms. These concepts help optimize problems that involve range sums and reduce time complexity significantly.

🔹 What is Prefix Sum?
Prefix Sum is an array where each element at index `i` stores the sum of elements from index `0` to `i`.
👉 Formula: prefix[i] = prefix[i-1] + arr[i]
👉 Java Example:
int[] arr = {1, 2, 3, 4, 5};
int n = arr.length;
int[] prefix = new int[n];
prefix[0] = arr[0];
for (int i = 1; i < n; i++) {
    prefix[i] = prefix[i - 1] + arr[i];
}
// Output: [1, 3, 6, 10, 15]

🔹 What is Suffix Sum?
Suffix Sum is an array where each element at index `i` stores the sum from index `i` to the end of the array.
👉 Formula: suffix[i] = suffix[i+1] + arr[i]
👉 Java Example:
int[] arr = {1, 2, 3, 4, 5};
int n = arr.length;
int[] suffix = new int[n];
suffix[n - 1] = arr[n - 1];
for (int i = n - 2; i >= 0; i--) {
    suffix[i] = suffix[i + 1] + arr[i];
}
// Output: [15, 14, 12, 9, 5]

💡 Why is this important?
✔ Reduces time complexity from O(n²) → O(n)
✔ Used in range sum queries
✔ Helps in solving problems like equilibrium index, subarray sums, etc.

Pro Tip: Once you understand prefix sums, try solving problems like:
- Subarray Sum Equals K
- Pivot Index
- Range Sum Query

✨ Consistency in DSA is the key. Small concepts like these build strong problem-solving foundations.

#DSA #Java #Programming #Coding #SoftwareEngineering #SDET #Learning
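To make the "range sum queries" use case concrete: once the prefix array is built in O(n), the sum of any subarray arr[l..r] is answered in O(1) as prefix[r] - prefix[l-1].

```java
public class RangeSum {
    // Build prefix sums exactly as in the post: prefix[i] = prefix[i-1] + arr[i].
    public static int[] prefix(int[] arr) {
        int[] p = new int[arr.length];
        p[0] = arr[0];
        for (int i = 1; i < arr.length; i++) {
            p[i] = p[i - 1] + arr[i];
        }
        return p;
    }

    // Sum of arr[l..r] inclusive in O(1); l == 0 needs no subtraction.
    public static int rangeSum(int[] prefix, int l, int r) {
        return l == 0 ? prefix[r] : prefix[r] - prefix[l - 1];
    }

    public static void main(String[] args) {
        int[] p = prefix(new int[]{1, 2, 3, 4, 5}); // [1, 3, 6, 10, 15]
        System.out.println(rangeSum(p, 1, 3));      // 2 + 3 + 4 = 9
    }
}
```

Without the prefix array, answering q such queries costs O(q·n); with it, O(n + q).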
📝 Part 3: The gRPC DSL — Architecting the “Source of Truth”

Hook: Protobuf isn’t just a file format—it’s a Domain Specific Language (DSL) 📜
👉 It defines a contract that ensures your microservices speak the same language—whether they’re written in Java, Go, or Python.

🏗️ The DSL Structure (Top → Bottom)
A .proto file follows a clean structure:

1️⃣ Metadata (The Foundation)
syntax = "proto3";
option java_package = "com.example.product";
🔹 syntax = "proto3"; → Mandatory version declaration
🔹 java_package → Controls where Java classes are generated

2️⃣ Service (The API Interface)
service ProductService {
  rpc getProduct (ProductRequest) returns (ProductResponse);
}
🔹 service → Like a Java interface
🔹 rpc → Like a method (request → response)

3️⃣ Messages (The Data Models)
message ProductRequest {
  int64 product_id = 1;
}
message ProductResponse {
  string name = 1;
  repeated string tags = 2;
}
🔹 message → Like a Java class (POJO)
🔹 repeated → Equivalent to List<>

🔤 Common Data Types
Protobuf supports multiple types, but commonly used ones include:
int32, int64 → Numbers
string → Text
bool → Boolean
double, float → Decimal values

🔢 The Rule of Tags (= 1, = 2)
int64 product_id = 1;
string name = 2;
👉 Over the network:
Field names are NOT sent ❌
Only tag numbers + values are sent ✅
Everything is encoded in compact binary format ⚡
💡 The receiver maps: Tag 1 → product_id, Tag 2 → name

⚡ Why This Is Powerful
Smaller payloads
Faster communication
Language independent
Backward compatible

🧠 Why This Is a DSL
Because you define:
Structure → message
Behavior → service
Contract → .proto
👉 Code is automatically generated for any language

🔥 Final Takeaway
Your .proto file is not just a schema—it’s your API blueprint
Platform agnostic
Strongly typed
Binary efficient
Always consistent
That’s what makes gRPC scalable 🚀

👉 Next: Part 4 — Running your first gRPC server & client ⚡
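The "only tag numbers + values are sent" point can be made concrete. In protobuf's wire format, each field is preceded by a key computed as (field_number << 3) | wire_type (wire type 0 = varint for int32/int64/bool, 2 = length-delimited for string). That is why renaming a field is wire-compatible but renumbering is not:

```java
public class WireTag {
    // Protobuf field key: the field's tag number shifted left 3 bits,
    // with the wire type in the low 3 bits.
    public static int key(int fieldNumber, int wireType) {
        return (fieldNumber << 3) | wireType;
    }

    public static void main(String[] args) {
        // int64 product_id = 1;  (wire type 0, varint)  -> key byte 0x08
        System.out.println(Integer.toHexString(key(1, 0)));
        // string name = 2;       (wire type 2, length-delimited) -> key byte 0x12
        System.out.println(Integer.toHexString(key(2, 2)));
    }
}
```

So for ProductResponse, the receiver sees only 0x12 (tag 2, string) followed by a length and bytes; the name "name" never leaves the sender.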
Java — Automatic Type Promotion of Primitives

I am exploring and explaining the concept of automatic type promotion of primitives using a simple Java program with two byte values:

class TestAutomatictypepromotion {
    public static void main(String[] ar) {
        byte a = 10;
        byte b = 20;
        int sum = a + b;
        System.out.println(sum);
    }
}

Let's prove the byte → int promotion step by step through actual bytecode analysis:
1. Compile the source: javac TestAutomatictypepromotion.java
2. Disassemble it: javap -c TestAutomatictypepromotion.class (this prints the bytecode)

The 3 smoking-gun proofs from the actual bytecode. Here is the raw javap -c output from this exact code, with the proof highlighted:

0: bipush 10   ← pushes 10 as INT (not byte)
2: istore_1    ← "i" = integer store (no bstore exists!)
3: bipush 20   ← pushes 20 as INT
5: istore_2    ← integer store
6: iload_1     ← "i" = integer load (no bload exists!)
7: iload_2     ← integer load
8: iadd        ← "i" = INTEGER add ← THE KEY PROOF
9: istore_3

Proof 1 — iload, not bload: When a and b are loaded from local variable slots, the opcodes are iload_1 and iload_2. The i prefix means integer. There is literally no bload instruction in the entire JVM specification.

Proof 2 — iadd, not badd: The addition uses iadd. There is no badd opcode. The JVM arithmetic instruction set only has iadd, ladd, fadd, dadd (int, long, float, double). Bytes have no dedicated add — they must become ints first.

Proof 3 — println:(I)V: The method descriptor in constant pool entry #13 is println:(I)V. The I is the JVM type descriptor for int. So even println receives an int, not a byte.

Why does Java do this? The JVM's operand stack and local variable slots work natively in 32-bit units. Byte, short, char, and boolean values are all widened to int the moment they enter the stack — this is called numeric promotion. It's defined in JLS §5.6.1 (Unary Numeric Promotion) and §5.6.2 (Binary Numeric Promotion). The JVM spec simply has no byte-level arithmetic opcodes — they were intentionally omitted to keep the instruction set small and the stack 32-bit aligned.

What happens when the result overflows a byte, and why does byte c = a + b cause a compile-time error without an explicit cast? Both questions have the same answer. The byte data type has a range of -128 to 127. Suppose we declare byte a = 100; byte b = 100; and then try byte c = a + b;. The compiler rejects this: a + b is promoted to int, and assigning an int to a byte requires an explicit cast. And the cast is genuinely lossy here — the result 200 is outside byte's range, so (byte)(a + b) wraps around to -56.

Java's compile-then-run model is what makes this behavior identical everywhere. Most developers fear the JVM. Java developers understand it.

Codeest Software Factory Anirudh Mangore Sandip Magdum Mehvish Fansopkar Mitali Dere Sakshi Randive Shruti Chavan NILESH GHAVATE Shaikh Abdulkhadir Java Recruiting Group, OpenJDK
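The overflow discussion above, as a runnable sketch (the commented-out line is the one the compiler rejects):

```java
public class BytePromotion {
    // a + b is binary-numeric-promoted to int, so the return type must be int.
    static int sum(byte a, byte b) {
        return a + b;
    }

    // Assigning back to byte requires an explicit narrowing cast,
    // which keeps only the low 8 bits of the int result.
    static byte narrowed(byte a, byte b) {
        return (byte) (a + b);
    }

    public static void main(String[] args) {
        byte a = 100, b = 100;
        // byte c = a + b;  // compile error: "possible lossy conversion from int to byte"
        System.out.println(sum(a, b));      // 200
        System.out.println(narrowed(a, b)); // -56: 200 is outside byte's -128..127 range
    }
}
```

The cast doesn't prevent the overflow; it just tells the compiler you accept the wraparound (200 = 0xC8, which as a signed byte is -56).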
Small concept. Big impact. In Java: byte + byte = int That’s automatic type promotion — and it’s one of those things that silently causes bugs if you don’t fully understand it. Back to basics = better code.
𝗝𝗗𝗞 𝘃𝘀 𝗝𝗥𝗘 𝘃𝘀 𝗝𝗩𝗠

Here's what actually happens when you run a Java program — and the parts most engineers never learn: JDK → JRE → JVM → JIT

𝗝𝗗𝗞 (Java Development Kit)
Your complete toolbox. Compiler (javac), debugger, profiler, keytool, jshell — and a bundled JRE. Without it, you can't write or compile Java; you can only run it.

𝗝𝗥𝗘 (Java Runtime Environment)
JVM + standard class libraries. Ships to end users. No compiler. No dev tools. Just enough to run a .jar.

𝗝𝗩𝗠 (Java Virtual Machine)
This is where it gets interesting.

𝗧𝗵𝗿𝗲𝗲 𝗹𝗮𝘆𝗲𝗿𝘀 𝗶𝗻𝘀𝗶𝗱𝗲:
• Class Loader — loads, links, and initializes .class files at runtime (not all at startup)
• Runtime Data Areas — Heap, Stack, Method Area, PC Register, Native Method Stack
• Execution Engine — interprets + compiles bytecode

𝗝𝗜𝗧 (Just-In-Time Compiler)
Watches your code at runtime. Identifies "hot" methods — those called frequently. Compiles them natively. Skips the interpreter next time. That's how Java catches up to C++ performance on long-running workloads.

𝗪𝗵𝗮𝘁 𝗺𝗼𝘀𝘁 𝗱𝗲𝘃𝘀 𝗺𝗶𝘀𝘀
• 𝗖𝗹𝗮𝘀𝘀 𝗟𝗼𝗮𝗱𝗶𝗻𝗴 𝗶𝘀 𝗹𝗮𝘇𝘆 — The JVM doesn't load all classes upfront. It loads them on first use — which is why cold start time differs from steady-state throughput.
• 𝗝𝗜𝗧 𝗵𝗮𝘀 𝘁𝗶𝗲𝗿𝘀 — The HotSpot JVM uses tiered compilation: C1 (fast, light optimization) kicks in first, then C2 (aggressive optimization) takes over for truly hot code. GraalVM replaces C2 entirely with a more powerful compiler.
• 𝗧𝗵𝗲 𝗛𝗲𝗮𝗽 𝗶𝘀 𝗻𝗼𝘁 𝗼𝗻𝗲 𝘁𝗵𝗶𝗻𝗴 — It's split: Eden → Survivor Spaces → Old Gen → Metaspace (post Java 8). Understanding this is a prerequisite to tuning GC and fixing OOM errors.
• 𝗦𝘁𝗮𝗰𝗸 𝘃𝘀 𝗛𝗲𝗮𝗽 — 𝗿𝘂𝗻𝘁𝗶𝗺𝗲 𝗯𝗲𝗵𝗮𝘃𝗶𝗼𝗿 — Every thread gets its own stack. Local primitives live there. Objects always go to the heap; references live on the stack. This is why stack overflows (deep recursion) and heap OOMs are completely different problems.
• 𝗝𝗩𝗠 𝗶𝘀 𝗻𝗼𝘁 𝗮𝗹𝘄𝗮𝘆𝘀 𝗶𝗻𝘁𝗲𝗿𝗽𝗿𝗲𝘁𝗲𝗱 — When GraalVM's native-image compiles your app ahead-of-time (AOT), there's no JVM at runtime at all. Instant startup. Fixed memory footprint. Different trade-offs.
• 𝗕𝘆𝘁𝗲𝗰𝗼𝗱𝗲 𝗶𝘀 𝗻𝗼𝘁 𝗯𝗶𝗻𝗮𝗿𝘆 — It's an intermediate representation — platform-agnostic instructions the JVM can run on any OS. This is Java's "write once, run anywhere" in practice, not just in theory.

𝗧𝗵𝗲 𝗺𝗲𝗻𝘁𝗮𝗹 𝗺𝗼𝗱𝗲𝗹 𝘁𝗵𝗮𝘁 𝗰𝗹𝗶𝗰𝗸𝘀:
• JDK = write + compile + run
• JRE = run only
• JVM = execution environment
• JIT = runtime optimizer

What's the JVM internals detail that surprised you most when you first learned it?

A special thanks to my faculty Syed Zabi Ulla sir at PW Institute of Innovation for their clear explanations and continuous guidance throughout this topic.

#Java #JVM #SoftwareEngineering #BackendDevelopment #ProgrammingFundamentals
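The "class loading is lazy" point is easy to demonstrate: a class's static initializer runs on first active use, not at program startup. A small sketch (note that a compile-time constant like static final int X = 42 would be inlined by javac and would not trigger initialization, which is why ANSWER is computed through a method here):

```java
public class LazyInit {
    static class Heavy {
        // Runs only when Heavy is first actively used, not when LazyInit starts.
        static {
            System.out.println("Heavy initialized");
        }
        // Initialized via a method call so it is NOT a compile-time constant.
        static final int ANSWER = computeAnswer();

        static int computeAnswer() {
            return 42;
        }
    }

    public static void main(String[] args) {
        System.out.println("main started"); // printed before Heavy's initializer
        System.out.println(Heavy.ANSWER);   // first use: triggers "Heavy initialized", then 42
    }
}
```

The output order (main started, Heavy initialized, 42) is the lazy-initialization behavior that makes cold start differ from steady state.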
📅 Day 77 out of 100 — Solving LeetCode Problems Daily, Kickstarting My Java + DSA Journey #100DaysOfCode 📚
📘 Course: Data Structures & Algorithms
📈 One Problem a Day: Consistency Compounding: Solving LeetCode problems daily is helping me strengthen my concepts and improve problem-solving skills step by step.

LeetCode Problem 225. Implement Stack using Queues

Implement a last-in-first-out (LIFO) stack using only two queues. The implemented stack should support all the functions of a normal stack (push, top, pop, and empty).

Implement the MyStack class:
void push(int x) Pushes element x to the top of the stack.
int pop() Removes the element on the top of the stack and returns it.
int top() Returns the element on the top of the stack.
boolean empty() Returns true if the stack is empty, false otherwise.

Notes:
You must use only standard operations of a queue, which means that only push to back, peek/pop from front, size, and is empty operations are valid.
Depending on your language, the queue may not be supported natively. You may simulate a queue using a list or deque (double-ended queue) as long as you use only a queue's standard operations.

Example 1:
Input
["MyStack", "push", "push", "top", "pop", "empty"]
[[], [1], [2], [], [], []]
Output
[null, null, null, 2, 2, false]

Explanation
MyStack myStack = new MyStack();
myStack.push(1);
myStack.push(2);
myStack.top(); // return 2
myStack.pop(); // return 2
myStack.empty(); // return False

Constraints:
1 <= x <= 9
At most 100 calls will be made to push, pop, top, and empty.
All the calls to pop and top are valid.

https://lnkd.in/di9TWj-K

Solution:

class MyStack {
    private Queue<Integer> q;

    public MyStack() {
        q = new LinkedList<>();
    }

    public void push(int x) {
        q.add(x);
        // Rotate so the newest element ends up at the front of the queue.
        for (int i = 0; i < q.size() - 1; i++) {
            q.add(q.poll());
        }
    }

    public int pop() {
        return q.poll();
    }

    public int top() {
        return q.peek();
    }

    public boolean empty() {
        return q.isEmpty();
    }
}
🛑 Stop hardcoding HQL strings in your Java methods!

✔️ Let’s be honest: we’ve all seen (or written) "String Soup." 🍜
✔️ You know the drill—massive HQL strings concatenated inside a DAO method, making the code hard to read and even harder to debug.
✔️ If you want cleaner, more professional Hibernate code, it's time to master Named Queries.

🤔 What is a Named Query?
✔️ Think of a Named Query as a "Pre-compiled Query Constant."
✔️ Instead of defining the query logic at the moment you call it, you define it at the Entity level using annotations.
✔️ Hibernate then validates these queries the moment your application starts. 🚀

💻 The Example

1️⃣ The "Clean" Definition
Define your query once in your Entity class. It stays organized and out of your business logic.

@Entity
@NamedQueries({
    @NamedQuery(
        name = "Project.findHighPriority",
        query = "SELECT p FROM Project p WHERE p.status = :status AND p.priority > 5"
    )
})
public class Project {
    @Id
    private Long id;
    private String status;
    private Integer priority;
}

2️⃣ The Elegant Execution
Calling it is a breeze. No strings, no mess.

public List<Project> getTopProjects() {
    return em.createNamedQuery("Project.findHighPriority", Project.class)
             .setParameter("status", "ACTIVE")
             .getResultList();
}

🏆 Why this wins:
1️⃣ Readability: Your Repository layer stays focused on execution, not string manipulation.
2️⃣ Performance: Hibernate parses the query once during initialization, not every time it's called.
3️⃣ Type Safety: It encourages a more structured approach to parameter binding.

❓ Is it perfect? ➡️ Not for everything. If your WHERE clause needs to change dynamically based on 10 different filters, stick to the Criteria API. But for static, standard lookups? Named Queries are king. 👑

❓ How do you handle your persistence layer?
💎 Named Queries
🛠️ Criteria API
⚡ Raw SQL
Let’s hear your take in the comments! 👇

#Java #SoftwareEngineering #Hibernate #SpringBoot #Backend #CleanCode #Programming
I have designed tool-prompt-generator: A Java library that generates LLM-compatible tool prompts from annotated Java methods. This library enables developers to define tools (functions) using simple annotations and automatically generate structured prompts that LLM models can use to understand and invoke those tools. #OpenSource #GitHub #PromptEngineering #DeveloperTools #AIProductivity #Java #BuildInPublic https://lnkd.in/d8G5-ydM
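The annotation-driven approach described can be sketched with plain reflection. To be clear, the @Tool annotation and the output format below are invented for this illustration and are not the actual API of tool-prompt-generator; they only show the general technique of turning annotated methods into a structured prompt.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

public class ToolPromptSketch {
    // Hypothetical marker annotation; the real library's annotations differ.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface Tool {
        String description();
    }

    static class Calculator {
        @Tool(description = "Adds two integers")
        public int add(int a, int b) {
            return a + b;
        }
    }

    // Walk the class's methods and render a tool description per annotated method.
    public static String describeTools(Class<?> cls) {
        StringBuilder sb = new StringBuilder();
        for (Method m : cls.getDeclaredMethods()) {
            Tool t = m.getAnnotation(Tool.class);
            if (t == null) continue; // skip methods without the annotation
            sb.append(m.getName())
              .append('(').append(m.getParameterCount()).append(" params): ")
              .append(t.description()).append('\n');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.print(describeTools(Calculator.class));
    }
}
```

A real generator would also emit parameter names and types (typically as JSON schema) so the LLM knows how to construct a call, but the reflection walk is the core of the idea.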
🚀 Java Deep Dive Series — Variables AI helps us write code faster. But understanding how data is stored and behaves in memory is what separates beginners from strong engineers. Today, I revisited: 👉 Java Variables Here’s a quick breakdown 👇 🔹 Primitive Types → 8 types (int, double, etc.) with fixed size & no objects 🔹 Reference Types → Store memory address (objects, arrays, strings) 🔹 Variable Types → Local (stack), Instance (heap), Static (shared) 🔹 Type Conversion → Widening (safe) vs Narrowing (explicit & risky) 🔹 Type Promotion → Smaller types auto-promoted to int in expressions 🔹 Pass by Value → Java is always pass-by-value (even for objects) ⚙️ Deep dive covered: 2’s complement (negative numbers), String pool vs heap, == vs .equals(), wrapper classes (boxing/unboxing), Integer caching (-128 to 127), and memory behavior of variables. 💡 My Key Takeaway: Most bugs are not syntax issues — they come from misunderstanding how data behaves in memory. 📘 I’ve documented detailed notes (with examples) here: 🔗 [https://lnkd.in/dPaPka54] I’ll keep adding more topics as I go. If you're revising Java fundamentals or preparing for interviews, this might help 🤝 #Java #LearningJourney #SoftwareEngineering #BackendDevelopment #Programming #AI
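The Integer caching and == vs .equals() points from these notes deserve a concrete example, since together they are a classic interview trap. The JLS guarantees the autoboxing cache only for -128 to 127; whether values above 127 are cached is JVM-dependent:

```java
public class IntegerCacheDemo {
    public static void main(String[] args) {
        Integer a = 127, b = 127;  // autoboxed within the guaranteed cache range -128..127
        Integer c = 128, d = 128;  // above the guaranteed cache range

        System.out.println(a == b);      // true: both references point to the same cached object
        System.out.println(c == d);      // typically false: distinct objects (JVM-dependent)
        System.out.println(c.equals(d)); // true: compares values, not references
    }
}
```

The takeaway: never compare boxed numbers with ==; it happens to work inside the cache range and silently breaks outside it.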
Day 10

Today I practiced rotating an array left by n positions. Instead of shifting elements one by one, I used the reversal algorithm, which is more efficient and avoids extra space.

Example:
Input: {1,2,3,4,5}, n = 3
Output: {4,5,1,2,3}

This was a great way to revise array indexing and in-place manipulation.

import java.util.*;

class Main {
    public static void main(String[] args) {
        int[] a = {1, 2, 3, 4, 5};   // expected output: {4, 5, 1, 2, 3}
        int n = 3;
        int k = n % a.length;
        int left = 0;
        int right = a.length - 1;
        a = reverseArray(a, 0, k - 1);
        System.out.println(Arrays.toString(a));
        a = reverseArray(a, k, right);
        System.out.println(Arrays.toString(a));
        a = reverseArray(a, left, right);
        System.out.println("Final result: " + Arrays.toString(a));
    }

    public static int[] reverseArray(int[] a, int left, int right) {
        while (left < right) {
            int temp = a[left];
            a[left] = a[right];
            a[right] = temp;
            left++;
            right--;
        }
        return a;
    }
}

Output:
[3, 2, 1, 4, 5]
[3, 2, 1, 5, 4]
Final result: [4, 5, 1, 2, 3]

#AutomationTestEngineer #Selenium #Java #CodingPractice #Arrays #ProblemSolving
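The same three-reversal trick packages nicely into a reusable method; taking n % a.length first also handles rotations larger than the array length (and k == 0 safely becomes a no-op pair of reversals):

```java
import java.util.Arrays;

public class RotateLeft {
    // Rotate left by n using three reversals: in place, O(n) time, O(1) extra space.
    public static void rotateLeft(int[] a, int n) {
        int k = n % a.length;           // normalize: rotating by a.length is the identity
        reverse(a, 0, k - 1);           // reverse the first k elements
        reverse(a, k, a.length - 1);    // reverse the rest
        reverse(a, 0, a.length - 1);    // reverse the whole array
    }

    static void reverse(int[] a, int l, int r) {
        while (l < r) {
            int t = a[l];
            a[l++] = a[r];
            a[r--] = t;
        }
    }

    public static void main(String[] args) {
        int[] a = {1, 2, 3, 4, 5};
        rotateLeft(a, 7);               // 7 % 5 == 2, so rotate left by 2
        System.out.println(Arrays.toString(a)); // [3, 4, 5, 1, 2]
    }
}
```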