🚀 My One-Year Journey in the AI World 🤖

Over the past year, I’ve been deeply exploring the world of Artificial Intelligence — understanding concepts like AI Agents, Agentic AI, LLMs, Machine Learning, and Data Science. During this journey, my constant companion has been Python 🐍, and I’ve truly come to appreciate its simplicity, power, and versatility.

I also learned how to implement the entire Data Science lifecycle using pure Python — without relying on libraries like Pandas, NumPy, or Matplotlib. This experience helped me understand what’s really happening “under the hood” of the tools we often take for granted — from data loading, cleaning, and exploration to reporting and serialization.

Here are some key learnings I’d like to share 👇

🧠 Core Python Concepts
• Operator Precedence (PEMDAS): Parentheses → Exponents → Multiplication/Division → Addition/Subtraction.
• Match-Case Statements: Introduced in Python 3.10 — Python’s version of switch-case.
• Tuples: Immutable, ordered collections; great for fixed datasets like coordinates or records.
• Encapsulation: Controlling access to data using getters and setters for better data hiding.
• Lambda Functions: Short, anonymous functions for quick operations.
• JSON Serialization: Converting Python objects into JSON strings for API or storage needs.

⚙️ Why NumPy Matters
While Python lists are flexible, they’re not efficient for large-scale numerical operations. NumPy brings superpowers to Python:
🚀 Speed: Optimized C-based backend — much faster than Python loops.
💾 Memory Efficiency: Contiguous memory blocks reduce overhead.
🔁 Vectorization: Perform operations on entire arrays instead of iterating.
🔍 Broadcasting: Enables operations between arrays of different shapes seamlessly.

Understanding multidimensional indexing and axis operations in NumPy was a game-changer for me — it’s the backbone of modern Data Science and ML computations.

🧩 Power of Pandas
Pandas makes structured data handling elegant and efficient:
📊 Series: 1D labeled arrays (like a single column in Excel).
🧮 DataFrame: 2D labeled tables (like spreadsheets or SQL tables).
It simplifies data manipulation, cleaning, and analysis, helping you write less code, save time, and gain readable, expressive insights.

💡 Key Takeaway: Understanding the basics of Python — lists, sets, dictionaries, loops, and functions — before jumping into frameworks like NumPy or Pandas helps you become a better programmer. It teaches you why these libraries exist and how they make Python so powerful for AI and Data Science.

🔥 It’s been an incredible journey so far...

#Python #AI #MachineLearning #DataScience #ArtificialIntelligence #LLM #AgenticAI #LearningJourney #NumPy #Pandas #Programming #OpenSource
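To make the vectorization, broadcasting, and axis points concrete, here is a small sketch (all numbers are made up for illustration):

```python
import numpy as np

# Vectorization: one operation on the whole array, no Python loop
prices = np.array([100.0, 250.0, 80.0])
discounted = prices * 0.9

# Broadcasting: a (3, 1) column combined with a (2,) row yields a (3, 2) grid
col = np.array([[1.0], [2.0], [3.0]])
row = np.array([10.0, 20.0])
grid = col * row                 # shapes (3, 1) and (2,) -> (3, 2)

# Axis operations: sum down columns (axis=0) vs. across rows (axis=1)
col_sums = grid.sum(axis=0)      # shape (2,)
row_sums = grid.sum(axis=1)      # shape (3,)
```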
My Year in AI: Python, NumPy, Pandas, and More
🐍 Python Libraries Every Data Scientist Should Know in 2025

Python is the heart of Data Science ❤️ But what makes Python truly powerful are its libraries — pre-built tools that make complex tasks super simple. From cleaning data to building machine learning models, these libraries are what turn Python into a data powerhouse. Let’s explore the most important ones 👇

📊 1️⃣ NumPy – The Foundation of Data Science
Think of NumPy as the base layer of all data work in Python.
🔹 Handles large, multi-dimensional arrays and matrices.
🔹 Enables fast numerical computations — way faster than normal Python lists!
💡 Used for: Mathematical operations, matrix calculations, and data preprocessing.

🧹 2️⃣ Pandas – Data Wrangling Made Easy
Pandas is every data scientist’s best friend.
🔹 Helps organize data into tables (DataFrames) for easy analysis.
🔹 Handles missing values, filtering, grouping, and merging with ease.
💡 Used for: Cleaning, preparing, and exploring datasets.

📈 3️⃣ Matplotlib & Seaborn – Visualize the Story
Data without visuals = half the story.
🔹 Matplotlib gives full control to create custom charts and plots.
🔹 Seaborn builds on Matplotlib, making beautiful visuals with fewer lines of code.
💡 Used for: Charts, correlations, and trend visualization.

🤖 4️⃣ Scikit-Learn – The ML Workhorse
Scikit-Learn is the go-to library for machine learning beginners and pros.
🔹 Offers tools for classification, regression, clustering, and model evaluation.
🔹 Easy-to-use syntax and integration with Pandas + NumPy.
💡 Used for: Building predictive models quickly and efficiently.

🧠 5️⃣ TensorFlow & PyTorch – Powering Deep Learning
The giants of AI!
🔹 TensorFlow (by Google) and PyTorch (by Meta) handle neural networks and deep learning.
🔹 Support GPU acceleration for large-scale model training.
💡 Used for: Image recognition, NLP, chatbots, and advanced AI models.

🧮 6️⃣ SciPy – Advanced Mathematics Simplified
Built on top of NumPy, SciPy adds power for scientific and technical computing.
🔹 Offers modules for optimization, integration, interpolation, and signal processing.
💡 Used for: Engineering, simulations, and advanced mathematical modeling.

🌐 7️⃣ Requests & BeautifulSoup – For Data Collection
Sometimes, data isn’t ready-made — you need to go get it!
🔹 Requests helps fetch data from APIs or websites.
🔹 BeautifulSoup is great for web scraping and extracting data from HTML.
💡 Used for: Collecting real-world data for analysis.

🚀 8️⃣ Statsmodels – For Statistical Analysis
If you love numbers and patterns, Statsmodels is your tool.
🔹 Performs hypothesis testing, regression analysis, and time series forecasting.
💡 Used for: Statistical modeling and econometrics.

🧰 In a Nutshell: Python libraries make Data Science faster, smarter, and easier. Each one serves a unique purpose — together, they empower data scientists to turn data into knowledge. 📊💡

#CupuleChicago #analyticssolution #cupulegwalior #cupuleeducation
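As a quick illustration of the Pandas workflow described above — the dataset, values, and column names here are invented for this sketch:

```python
import pandas as pd

# Tiny made-up dataset (column names are illustrative only)
df = pd.DataFrame({
    "city": ["Gwalior", "Chicago", "Gwalior", None],
    "sales": [120.0, None, 95.0, 40.0],
})

# Cleaning: fill a missing value with the column mean, drop rows missing a city
df["sales"] = df["sales"].fillna(df["sales"].mean())
df = df.dropna(subset=["city"])

# Exploring: group and aggregate
summary = df.groupby("city")["sales"].sum()
```

The same three steps — load, clean, aggregate — cover a surprising share of day-to-day data work.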
Machine Learning Project! 🚀 Built a Tip Predictor with Python & Scikit-learn.

I'm thrilled to share my first-ever end-to-end machine learning project! 💡 As I take my first steps into the world of data science, I decided to start with a classic but fascinating problem: can we predict a restaurant tip based on the total_bill?

For this project, I used the popular 'tips' dataset available directly from the Seaborn library. It's a fantastic dataset that includes total_bill, tip, sex, smoker, and other details.

My Workflow:
1. Data Loading & Exploration (EDA): I loaded the data using Python's Pandas library and took a first look at its structure with df.head().
2. Visualization: Before building a model, I wanted to see the data. I used Seaborn's lmplot to visualize the relationship between total_bill and tip. As you can see from the graph on Page 2, there's a clear positive linear relationship — as the bill goes up, the tip generally goes up too! This visualization gave me the confidence that a Simple Linear Regression model would be a good fit.
3. Model Building & Training: This is where Scikit-learn (sklearn) came into play. I defined my feature X (what we use to predict) as the total_bill and my target y (what we want to predict) as the tip. I initialized the LinearRegression() model and trained it on my data using model.fit(X, y).
4. Prediction & Results: After training, the model was ready to make predictions! I tested it with a few new values: for a $50.00 bill, my model predicted a tip of ~$6.17; for a $35.24 bill, it predicted a tip of ~$4.62.

My Learnings: This project was a fantastic experience to understand the complete machine learning workflow, from loading data to making a real prediction. It taught me how data visualization is crucial for guiding model selection and how Scikit-learn simplifies complex mathematical modeling into just a few lines of code.

This is a small but important first step in my data science journey, and I'm excited to build on this foundation with more complex projects!

What was your first machine learning project? Or what are you working on right now? I'd love to hear about it in the comments! 👇

#datascience #machinelearning #python #scikitlearn #linearregression #datavisualization #pandas #seaborn #project #beginner #dataanalysis #ai #datasciencejourney #portfolioproject #analytics #machinelearningprojects
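The post's exact notebook isn't shown, so here is a minimal sketch of the same fit-and-predict workflow. A few made-up bill/tip pairs stand in for the Seaborn tips dataset, so the predicted value below applies only to this toy data, not to the real dataset:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Stand-in data for the "tips" dataset (illustration only, exactly 15% tips)
total_bill = np.array([[10.0], [20.0], [30.0], [40.0]])
tip = np.array([1.5, 3.0, 4.5, 6.0])

# Fit a simple linear regression: tip ~ total_bill
model = LinearRegression()
model.fit(total_bill, tip)

# Predict the tip for a new $50.00 bill
predicted = model.predict([[50.0]])[0]
```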
🎯 Python is the world's #1 language. ⚡ But why does it dominate AI & Big Data?

🔍 The Problem: Developers and data scientists waste countless hours "reinventing the wheel," writing complex math functions, ML algorithms, or web handlers from scratch.

💡 Why it Matters: In a rapid go-to-market world, development velocity and iteration speed are more critical than raw performance. Lacking standardized tools slows innovation to a crawl.

❓ The Question: How does Python dominate diverse fields (Web, AI) when other languages specialize?

🏃‍♂️ The Approach: Instead of one monolithic "do-it-all" framework, the Python community adopted a modular approach: building and maintaining highly specialized, open-source, and robust libraries for EVERY specific task.

📈 The Result: The birth of a "Golden Ecosystem" of dominant libraries. Here are the Top 20 "titans," categorized by their function:

💻 Data Science & Scientific Computing
1. NumPy: The fundamental package for scientific computing and N-dimensional arrays.
2. SciPy: Extends NumPy with algorithms for optimization, linear algebra & signal processing.
3. Matplotlib: The original 2D/3D plotting and data visualization library.
4. Pandas: Powerful data manipulation and analysis, built around the DataFrame.
5. SymPy: The go-to library for symbolic mathematics (algebra).

🤖 Machine Learning & Deep Learning
6. Keras: A high-level, user-friendly API for building neural networks.
7. TensorFlow: Google's end-to-end platform for Machine Learning.
8. PyTorch: TensorFlow's main competitor, loved for its flexibility and dynamic graphs.
9. Theano: (A pioneer) Optimized mathematical expressions.
10. Scikit-learn: The go-to library for classical Machine Learning algorithms.

🌐 Web Development, APIs & Testing
11. Requests: "HTTP for Humans" — simplifying HTTP requests.
12. Scrapy: A powerful framework for web scraping and crawling.
13. Nose: A framework that makes testing (unit tests) easier.
14. Flask: A flexible micro-framework for quickly building web apps and APIs.
15. Django: The "batteries-included" full-stack framework for complex web applications.
16. Falcon: A high-performance framework specifically for building web APIs.

🖼️ Gaming, Media & Computer Vision
17. PyGame: The classic library for 2D game development.
18. Pyglet: A multimedia library for game development and graphical applications.
19. Pillow: The "friendly fork" of PIL, essential for basic image processing.
20. OpenCV (Python): The #1 library for real-time Computer Vision.

🎯 The Breakthrough: Not one library, but interoperability. The real power is combining them (e.g., Pandas + NumPy + Scikit-learn + Matplotlib).

💰 ROI: Massively reduced time-to-market and easier hiring.

💡 Key Takeaway: Python's power isn't the language itself; it's the ecosystem that lets you "stand on the shoulders of giants."

👇 Which of these libraries have you applied in your projects? Share it below!
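The interoperability point above can be sketched in a few lines — Pandas holds the table, NumPy generates the numbers, and Scikit-learn fits the model (the data here is synthetic, for illustration only):

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

# NumPy generates synthetic data: y ≈ 2x + 1 plus small noise
rng = np.random.default_rng(0)
df = pd.DataFrame({"x": np.arange(20.0)})
df["y"] = 2.0 * df["x"] + 1.0 + rng.normal(0, 0.1, size=20)

# Pandas columns flow straight into Scikit-learn
model = LinearRegression().fit(df[["x"]], df["y"])
slope = model.coef_[0]  # should land near 2.0
```

Each library does one job, and the NumPy array is the shared currency that lets them compose.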
📊 Day 6-7: Data Structures — How Python Stores ML Data

Even though I've worked with Python before, I'm brushing up on fundamentals for my AI/ML journey — strong basics make everything easier later. Today I revisited data structures — how Python organizes and stores data.

Key realizations:
• Lists = store your datasets and predictions
• Dictionaries = save model settings and results
• Sets = find unique values (like class labels)
• Tuples = store data that shouldn't change
• List comprehension = transform data in one clean line

It clicked — ML is just organizing data in these structures and processing it.

What I practiced:
✅ Strings (slicing, methods)
✅ Lists (add, remove, sort) ⭐⭐⭐
✅ Tuples (when to use them)
✅ Dictionaries (storing key-value pairs) ⭐⭐⭐
✅ Sets (removing duplicates)
✅ Arrays (2D lists)
✅ List comprehension (filtering) ⭐⭐⭐

Built two practical examples:

Example 1 - Model Comparison:
```python
results = [
    {"model": "CNN", "accuracy": 0.92, "loss": 0.08},
    {"model": "RNN", "accuracy": 0.88, "loss": 0.12},
    {"model": "LSTM", "accuracy": 0.94, "loss": 0.06},
]

# Filter high performers using list comprehension
high_performers = [r["model"] for r in results if r["accuracy"] >= 0.9]

# Find best model
best = max(results, key=lambda x: x["accuracy"])
```

Example 2 - Dataset Organization:
```python
dataset = {
    "train": {"images": [...], "labels": [0, 1, 0], "size": 3},
    "test": {"images": [...], "labels": [1, 0], "size": 2},
}

# Find unique classes using sets
for split, data in dataset.items():
    unique_labels = set(data["labels"])
    print(f"{split}: {sorted(unique_labels)}")
```

This combines dictionaries, lists, sets, and list comprehension — exactly how you organize data in real ML projects.

Revisiting basics feels right. Understanding these structures well makes reading ML libraries much easier. Follow along and learn with me!

#MachineLearning #AI #Python #DataScience #DeepLearning #LearningInPublic #AspiringAIEngineer
Life is Short, I Use Python!

Here’s why Python rules every corner of tech — from data science to automation:

Data Manipulation
Polars | Vaex | CuPy | NumPy
Effortlessly handle massive datasets with lightning-fast performance.

Data Visualization
Plotly | Seaborn | Altair | Folium | Geoplotlib | Pygal
Turn raw data into beautiful, interactive visual stories.

Statistical Analysis
SciPy | PyMC3 | Statsmodels | PyStan | Lifelines | Pingouin
Perform hypothesis testing, regression, and probability modeling.

Machine Learning
TensorFlow | PyTorch | Scikit-learn | XGBoost | JAX | Keras
Build, train, and deploy smart ML models for real-world problems.

Natural Language Processing
spaCy | NLTK | BERT | TextBlob | Polyglot | Pattern | Gensim
Teach machines to understand human language with ease.

Time Series Analysis
Prophet | Sktime | AutoTS | Darts | Kats | Bifesh
Predict trends and forecast future events using time-based data.

Distributed Data Processing
Dask | PySpark | Ray | Koalas | Hadoop
Manage and process distributed data like a pro.

Web Scraping
Beautiful Soup | Scrapy | Octoparse
Extract valuable insights from the web automatically.

Why Python? Because it’s powerful, flexible, beginner-friendly, and unstoppable in AI, data, and automation.

𝐇𝐞𝐫𝐞 𝐚𝐫𝐞 𝐭𝐡𝐞 𝐛𝐞𝐬𝐭 𝐟𝐫𝐞𝐞 𝐜𝐨𝐮𝐫𝐬𝐞𝐬.
1. Data Science: Machine Learning
Link: https://lnkd.in/gUNVYgGB
2. Introduction to computer science
Link: https://lnkd.in/gR66-htH
3. Introduction to programming with Scratch
Link: https://lnkd.in/gBDUf_Wx
4. Computer science for business professionals
Link: https://lnkd.in/g8gQ6N-H
5. How to conduct and write a literature review
Link: https://lnkd.in/gsh63GET
6. Software Construction
Link: https://lnkd.in/ghtwpNFJ
7. Machine Learning with Python: from linear models to deep learning
Link: https://lnkd.in/g_T7tAdm
8. Startup Success: How to launch a technology company in 6 steps
Link: https://lnkd.in/gN3-_Utz
9. Data analysis: statistical modeling and computation in applications
Link: https://lnkd.in/gCeihcZN
10. The art and science of searching in systematic reviews
Link: https://lnkd.in/giFW5q4y
11. Introduction to conducting systematic review
Link: https://lnkd.in/g6EEgCkW
12. Introduction to computer science and programming using Python
Link: https://lnkd.in/gwhMpWck
13. Introduction to computational thinking and data science
Link: https://lnkd.in/gfjuDp5y
14. Becoming an Entrepreneur
Link: https://lnkd.in/gqkYmVAW
15. High-dimensional data analysis
Link: https://lnkd.in/gv9RV9Zc
16. Statistics and R
Link: https://lnkd.in/gUY3jd8v
17. Conduct a literature review
Link: https://lnkd.in/g4au3w2j
18. Systematic Literature Review: An Introduction
Link: https://lnkd.in/gVwGAzzY
19. Introduction to systematic review and meta-analysis
Link: https://lnkd.in/gnpN9ivf

Follow MD AZIZUL HAQUE for more

#Python #DataScience #MachineLearning #NLP #BigData #ProgrammingAssignmentHelper
✅ *Python for Data Science – Part 4: Scikit-learn Interview Q&A* 🤖📈

*1. What is Scikit-learn?*
A powerful Python library for machine learning. It provides tools for classification, regression, clustering, and model evaluation.

*2. How to train a basic model in Scikit-learn?*
```python
from sklearn.linear_model import LinearRegression

model = LinearRegression()
model.fit(X_train, y_train)
```

*3. How to make predictions?*
```python
predictions = model.predict(X_test)
```

*4. What is train_test_split used for?*
To split data into training and testing sets.
```python
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
```

*5. How to evaluate model performance?*
Use metrics like accuracy, precision, recall, F1-score, or RMSE.
```python
from sklearn.metrics import accuracy_score

accuracy_score(y_test, predictions)
```

*6. What is cross-validation?*
A technique to assess model performance by splitting data into multiple folds.
```python
from sklearn.model_selection import cross_val_score

cross_val_score(model, X, y, cv=5)
```

*7. How to standardize features?*
```python
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)
```

*8. What is a pipeline in Scikit-learn?*
A way to chain preprocessing and modeling steps.
```python
from sklearn.pipeline import Pipeline

pipe = Pipeline([("scaler", StandardScaler()), ("model", LinearRegression())])
```

*9. How to tune hyperparameters?*
Use GridSearchCV or RandomizedSearchCV.
```python
from sklearn.model_selection import GridSearchCV

grid = GridSearchCV(model, param_grid, cv=5)
```

*🔟 What are common algorithms in Scikit-learn?*
- LinearRegression
- LogisticRegression
- DecisionTreeClassifier
- RandomForestClassifier
- KMeans
- SVM

💬 *Double Tap ❤️ For More!*
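Putting questions 4, 7, 8, and 9 above together, here is one runnable end-to-end sketch. The data is a toy synthetic regression problem, and Ridge plus the alpha grid are arbitrary choices for illustration:

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV, train_test_split

# Toy regression data: y is a known linear combination plus small noise
rng = np.random.default_rng(42)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(0, 0.1, size=100)

# Q4: hold out a test set
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Q7 + Q8: scaling and the model chained into one pipeline
pipe = Pipeline([("scaler", StandardScaler()), ("model", Ridge())])

# Q9: tune the pipeline's hyperparameter via cross-validated grid search
grid = GridSearchCV(pipe, {"model__alpha": [0.01, 0.1, 1.0]}, cv=5)
grid.fit(X_train, y_train)

score = grid.score(X_test, y_test)  # R^2 on held-out data
```

Note the `model__alpha` naming: pipeline step name, double underscore, parameter name.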
Master Python collections in one glance! Here’s how each data type behaves:

1️⃣ String
• Immutable
• Ordered / Indexed
• Allows duplicates
• Example: "Techie"
• Stores: only characters
• Empty string: ""

2️⃣ List
• Mutable
• Ordered / Indexed
• Allows duplicates
• Example: ["Techie"]
• Stores: any datatype (str, int, set, tuple, etc.)
• Empty list: []

3️⃣ Tuple
• Immutable
• Ordered / Indexed
• Allows duplicates
• Example: ("Techie",) — note the comma; ("Techie") is just a string
• Stores: any datatype (str, int, list, dict, etc.)
• Empty tuple: ()

4️⃣ Set
• Mutable
• Unordered
• No duplicates allowed
• Example: {"Techie"}
• Stores: only hashable values (not list, set, dict)
• Empty set: set()

5️⃣ Dictionary
• Mutable
• Preserves insertion order (Python 3.7+), but no positional indexing
• No duplicate keys allowed
• Example: {"Techie": 1}
• Keys: any hashable type (int, str, tuple, ...)
• Values: any datatype (str, list, set, dict)
• Empty dict: {}

Pro Tip: Use Lists when order matters, Sets for unique data, and Dictionaries for key-value pairs. Strings and Tuples are best for fixed data.

I searched 300 free courses, so you don't have to. Here are the best free courses:
1. Data Science: Machine Learning
Link: https://lnkd.in/gUNVYgGB
2. Introduction to computer science
Link: https://lnkd.in/gR66-htH
3. Introduction to programming with Scratch
Link: https://lnkd.in/gBDUf_Wx
4. Computer science for business professionals
Link: https://lnkd.in/g8gQ6N-H
5. How to conduct and write a literature review
Link: https://lnkd.in/gsh63GET
6. Software Construction
Link: https://lnkd.in/ghtwpNFJ
7. Machine Learning with Python: from linear models to deep learning
Link: https://lnkd.in/g_T7tAdm
8. Startup Success: How to launch a technology company in 6 steps
Link: https://lnkd.in/gN3-_Utz
9. Data analysis: statistical modeling and computation in applications
Link: https://lnkd.in/gCeihcZN
10. The art and science of searching in systematic reviews
Link: https://lnkd.in/giFW5q4y
11. Introduction to conducting systematic review
Link: https://lnkd.in/g6EEgCkW
12. Introduction to computer science and programming using Python
Link: https://lnkd.in/gwhMpWck
13. Introduction to computational thinking and data science
Link: https://lnkd.in/gfjuDp5y
14. Becoming an Entrepreneur
Link: https://lnkd.in/gqkYmVAW
15. High-dimensional data analysis
Link: https://lnkd.in/gv9RV9Zc
16. Statistics and R
Link: https://lnkd.in/gUY3jd8v
17. Conduct a literature review
Link: https://lnkd.in/g4au3w2j
18. Systematic Literature Review: An Introduction
Link: https://lnkd.in/gVwGAzzY
19. Introduction to systematic review and meta-analysis
Link: https://lnkd.in/gnpN9ivf
20. Creating a systematic literature review
Link: https://lnkd.in/gbevCuy6
21. Systematic reviews and meta-analysis
Link: https://lnkd.in/ggnNeX5j
22. Research methodologies
Link: https://lnkd.in/gqh3VKCC
23. Quantitative and Qualitative research for beginners
Link: https://shorturl.at/uNT58

Follow SARMIN AKTER for more

#Python #DataTypes #CheatSheet #ProgrammingAssignmentHelper
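The collection behaviors summarized in the cheat sheet above can be checked in a few lines of Python:

```python
# How each collection behaves (mirrors the cheat sheet)
text = "Techie"            # immutable; text[0] = "t" would raise TypeError
items = ["a", "a", 1]      # mutable, ordered, duplicates and mixed types OK
point = ("Techie",)        # trailing comma matters: ("Techie") is just a str
labels = {"a", "a", "b"}   # duplicates collapse -> 2 elements
config = {"Techie": 1}     # keys must be hashable (str, int, tuple, ...)

items.append("b")          # lists mutate in place
unique_count = len(labels) # 2
```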
Fantastic! Thank you for sharing your journey, Satyadeep. Very Inspiring and encouraging!