Small files densely packed have been the cause of much heartburn, whether you are backing up data or migrating it. Migrating each file or object requires multiple I/O operations regardless of its size. If there are 1,000 files of 4 KB each and another 1,000 files of 4 MB each, the number of I/O operations required to migrate them is the same, but the amount of data moved is very different. When estimating time to completion, it's not just the network bandwidth or the migration tooling that matters; it's also the size and quantity of the files or objects being migrated. #datamigration
Why file size matters in data migration
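The arithmetic above can be sketched as a simple time model. The 10 ms per-object overhead and 100 MB/s bandwidth below are illustrative assumptions, not figures from the post:

```python
def estimate_migration_seconds(num_objects, avg_size_bytes,
                               per_object_overhead_s=0.01,
                               bandwidth_bytes_per_s=100e6):
    """Crude model: fixed per-object I/O overhead plus raw transfer time."""
    overhead = num_objects * per_object_overhead_s
    transfer = num_objects * avg_size_bytes / bandwidth_bytes_per_s
    return overhead + transfer

small = estimate_migration_seconds(1_000, 4 * 1024)       # 1,000 x 4 KB files
large = estimate_migration_seconds(1_000, 4 * 1024 ** 2)  # 1,000 x 4 MB files
```

Under these assumptions the small files take about 10 seconds, almost all of it per-object overhead, which is why millions of tiny files migrate far slower than their byte count suggests.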
More Relevant Posts
Recent client cleanup results:
- 1.3M unnecessary session rows removed
- 68% query time reduction
- Page loads: 2.8s → 1.1s
- Conversion rate: +1.7% in 24 hours

Regular database optimisation = better revenue.
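A batched cleanup like the one described might look like this sketch. The `sessions` table and `last_seen` column are hypothetical, and SQLite stands in for whatever database the client actually ran:

```python
import sqlite3

def purge_stale_sessions(conn, cutoff, batch_size=10_000):
    """Delete expired session rows in small batches so each
    transaction (and its locks) stays short."""
    total = 0
    while True:
        cur = conn.execute(
            "DELETE FROM sessions WHERE rowid IN "
            "(SELECT rowid FROM sessions WHERE last_seen < ? LIMIT ?)",
            (cutoff, batch_size),
        )
        conn.commit()
        total += cur.rowcount
        if cur.rowcount < batch_size:
            return total
```

Batching matters at the 1.3M-row scale: one giant `DELETE` can hold locks and bloat the transaction log for the whole run.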
If I had to optimize latency, here are 12 rules I'd consider:
1. Database indexing
2. Payload compression
3. Request grouping
4. HTTP/2 parallel requests
5. CDN
6. Fewer external dependencies
7. Load balancing
8. Vertical scaling
9. Caching
10. Connection pooling
11. Message queues
12. Efficient data serialization

This is just a simple guide to reducing latency. What else should make this list?
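Rule 2 (payload compression) is easy to demonstrate with the standard library; the payload below is a made-up example of the kind of repetitive JSON a list endpoint returns:

```python
import gzip
import json

# Hypothetical repetitive JSON payload, typical of list endpoints.
payload = json.dumps([{"id": i, "status": "active"} for i in range(1000)]).encode()
compressed = gzip.compress(payload)
print(f"{len(payload)} -> {len(compressed)} bytes")
```

Repetitive structures compress heavily, so fewer bytes cross the wire per request.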
The Commit That Took Down Our Staging Server (and What I Learned)

How a single line of "harmless" configuration change taught me everything about deployment pipelines, rollback strategies, and the art of not panicking.

DATABASE_POOL_SIZE=1000 ("More connections = better performance, right?")
DATABASE_POOL_SIZE_PROD=100

Lesson 4: Monitoring Is Everything. We had no alerts for database connection exhaustion, so the first sign of trouble was complete system failure. Solution implemented: alerts for connection pool usage, memory consumption, and database health.

The Recovery (3:30 PM - 4:30 PM):
- Emergency database restart (scary, but necessary)
- Manual config rollback (bypassing our normal pipeline)
- Service-by-service restart (watching everything come back to life)
- Client demo (somehow happened on time, using production data)

The Aftermath: Building Better Systems. New deployment rules:
- Configuration changes require peer review (just like code)
- Infrastructure changes get tested in isolation first
- Rollback procedures are tested monthly
- Resource limits are https://lnkd.in/gHGNKnkm
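The pool-usage alerting the post ends up adding could start as a simple threshold check. The 80% warning threshold and the `check_pool_usage` helper are illustrative assumptions, not the author's actual implementation:

```python
def check_pool_usage(active, max_size, warn_ratio=0.8):
    """Classify connection-pool utilisation for alerting."""
    usage = active / max_size
    if usage >= 1.0:
        return "critical: pool exhausted"
    if usage >= warn_ratio:
        return "warning: pool nearly exhausted"
    return "ok"

check_pool_usage(1000, 1000)  # -> "critical: pool exhausted"
```

The point is to fire the warning well before exhaustion, so the first signal is an alert rather than a dead staging server.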
What does N+1 redundancy mean in a data center?
- "N" = the number of components required to handle the normal (peak) load.
- "+1" = one additional (backup) component for redundancy.

So N+1 means there is one extra backup unit available to take over if any single component fails.

🔹 Example: suppose your data center needs 4 UPS/generator/chiller units to support its full IT load. In an N+1 configuration, you would install 4 (needed) + 1 (backup) = 5 units. If one unit fails or is under maintenance, the other four can still carry the entire load.
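The N+1 sizing rule is just ceiling division plus spares. A one-line sketch, with load and capacity figures chosen so that N = 4 as in the example above:

```python
import math

def units_required(peak_load_kw, unit_capacity_kw, redundancy=1):
    """N+R sizing: N units to carry the peak load, plus R spare units."""
    n = math.ceil(peak_load_kw / unit_capacity_kw)  # N: units needed at peak
    return n + redundancy                           # N+1 by default

units_required(2000, 500)  # N = 4, so N+1 = 5 units
```

Passing `redundancy=2` gives the stricter N+2 scheme some facilities use for concurrent maintenance plus a failure.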
My current biggest time sink is Supabase's lack of true down migrations. `migration down` deletes all data, which is essentially like doing a `db reset`.
If you want to reduce latency, learn these 10 rules:
1. Database indexing
2. Payload compression
3. Request grouping
4. CDN
5. Load balancing
6. Vertical scaling
7. Caching
8. Connection pooling
9. Message queues
10. Data serialization
🚀 SQL Server AlwaysOn Migration Lessons Learned 🚀

Recently, I was involved in a large-scale migration project for an environment running SQL Server with AlwaysOn Availability Groups. The VMs were hefty and needed to be moved across vCenters; not a small feat.

🔧 Our approach:
- Switched the AG to asynchronous mode
- Stopped and disabled SQL services
- Proceeded with the VM migration

Everything went smoothly... until the secondary node (after 8 days of migration) failed to rejoin the cluster, throwing network and connectivity errors from the new vCenter. 🕵️ We ran PowerShell's Test-NetConnection and the network looked fine. The real breakthrough came from digging into the cluster logs.

✅ Resolution:
- Stopped SQL services on the secondary
- Evicted the node from the cluster
- Rejoined it cleanly
- Restarted SQL services

Boom 💥 — the AG started syncing immediately, and the 8-day data lag was resolved in just 9 hours. We then flipped the AG back to synchronous mode.

💡 This experience saved us serious time when the current primary became the secondary and underwent the same migration. If you're planning a similar AlwaysOn migration across vCenters, keep this workflow in mind, especially the cluster rejoin strategy. It might just save your weekend 😄

#SQLServer #AlwaysOn #Migration #HighAvailability #vCenter #ClusterManagement #SysAdminTips #DatabaseOps
The Art of Database Connection Pooling

Behind every high-performing backend is the connection pool. When your application accesses the database, it does not create a new connection each time: creating connections is expensive. The pool keeps a set of ready connections that can be reused, making your application faster and more efficient.

What is a Connection Pool
It is a cache of database connections that are kept open and ready for use. This avoids the cost of repeatedly opening and closing connections.

Minimum and Maximum Pool Size
The minimum number of idle connections is always kept ready, even when traffic is low. The maximum pool size limits the total number of connections. If all connections are busy, new requests wait until a connection becomes available.

Idle Timeout and Connection Timeout
Idle timeout determines how long unused connections remain in the pool before being closed. Connection timeout is how long a request waits to get a connection from the pool before failing.

Validating Connections
Each connection is checked before use to make sure it is alive. This prevents errors like closed connections in production.

Tuning the Pool
Too few connections can create bottlenecks. Too many can overload the database. The right balance depends on your workload, CPU, and query speed.

Connection Lifetime
Connections should be recycled after a certain time to prevent stale or broken connections.

Monitoring
Keep track of active and idle connections. If active connections stay near the maximum for long periods, it may be time to optimize queries or scale the system.

Connection pooling is not just a configuration setting. It is one of the most important factors in application performance and stability.

#Database #DatabasePerformance #ConnectionPooling #HikariCP #BackendDevelopment #PerformanceTuning #TechTips #SoftwareEngineering #JDBC
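A toy sketch of the minimum/maximum size and connection-timeout behaviour described above. The `connect` argument is a placeholder factory, the pool is deliberately simplified (the counter is not lock-protected), and real applications should use a mature pool such as HikariCP or SQLAlchemy's built-in pooling:

```python
import queue

class ConnectionPool:
    """Toy pool: pre-opens min_size connections, caps the total at max_size."""

    def __init__(self, connect, min_size=2, max_size=5, timeout=1.0):
        self._connect = connect   # factory that opens a new connection
        self._timeout = timeout   # "connection timeout": max wait for a free conn
        self._idle = queue.Queue(maxsize=max_size)
        self._max = max_size
        self._total = 0
        for _ in range(min_size):          # "minimum pool size": kept ready up front
            self._idle.put(self._connect())
            self._total += 1

    def acquire(self):
        try:
            return self._idle.get_nowait()   # reuse an idle connection
        except queue.Empty:
            if self._total < self._max:      # grow up to "maximum pool size"
                self._total += 1
                return self._connect()
            # Pool exhausted: block up to the timeout, then raise queue.Empty.
            return self._idle.get(timeout=self._timeout)

    def release(self, conn):
        self._idle.put(conn)   # return the connection for reuse
```

Even this toy version shows the trade-off the post describes: a small `max_size` makes callers queue, while a large one pushes the load onto the database.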