Two Years in Dubai: Lessons in Hospitality as a Hotel GM

Two years ago, I arrived in Dubai, stepping into one of the most dynamic and competitive hospitality markets in the world. As General Manager of a five-star hotel, I knew the expectations would be high. Today, I reflect on the key lessons I’ve learned about delivering exceptional hospitality in this unique city—one that is arguably at the forefront of the global hospitality industry.

➡️ Exceeding Expectations is the Baseline
Dubai redefines luxury. Guests arrive with expectations shaped by the city's reputation for innovation, excellence, and impeccable service. Here, meeting expectations isn’t enough—exceeding them is the norm. From personalized welcomes to anticipating unspoken needs, every detail matters in crafting unforgettable experiences.

➡️ Cultural Sensitivity is Non-Negotiable
With visitors and employees from every corner of the world, cultural intelligence is essential. Understanding diverse traditions, communication styles, and service preferences allows for a more personalized and respectful guest experience. Training teams – in our case, 75 nationalities – to be culturally aware ensures seamless interactions and elevated satisfaction.

➡️ Agility Defines Success
Dubai’s hospitality and gastronomy scene moves fast—trends shift, guest preferences evolve, and market dynamics change rapidly. Staying ahead means embracing agility, whether by integrating new technologies, rethinking service models, or responding to global challenges. Adaptability is key to maintaining a competitive edge.

➡️ A Five-Star Team Creates a Five-Star Experience
Exceptional hospitality starts with an exceptional team. Employee engagement, well-being, and recognition directly impact service quality. Investing in training, fostering a strong service culture, and ensuring top-tier staff accommodation are critical to driving performance and morale. Happy teams create happy guests.
➡️ Technology Enhances, but People Deliver
While technology plays a growing role in streamlining operations and enhancing convenience, for me true hospitality remains personal. No digital solution can replace the search for the “Golden Nuggets” or the anticipatory service of a well-trained team. Balancing tech with the human touch ensures efficiency without compromising the emotional connection guests seek.

Looking Ahead
Dubai continues to evolve, and so does its hospitality landscape. The past two years have reinforced that success in this industry is about staying guest-centric, adaptable, and innovative. As I look forward, one thing remains unchanged—hospitality isn’t just about service; it’s about creating experiences that leave a lasting impression.

What have been your key learnings in hospitality? I’d love to hear your thoughts! #Hospitality #Hotels #Luxury #WhatInspiresMe
Ethical Innovation Standards
-
5 Ways to Turn US-India Culture Differences Into Collaboration Wins (With Real-World How-To’s)

1. Invest in Cultural Fluency—Not Just Sensitivity
What to do: Host “culture exchange” sessions. Invite both teams to share how and why they work the way they do.
Example: One company held monthly “Ask Me Anything” calls. Indian teams asked about the US’s drive for speed. US teams learned why Indian teams seek senior buy-in.
Result: Less frustration, more alignment.

2. Blend Directness With Context
What to do: Start meetings with clear, direct goals (US style), then invite scenario-based or clarifying questions (India style).
Example: In a product launch, the US PM set the objectives, then the India lead explored the “what-ifs.” This led to both faster starts and better coverage of risks.

3. Rotate Meeting Leadership
What to do: Don’t let the same side run every meeting. Switch between US and India leads.
Example: For weekly standups, the India manager led one week and surfaced local blockers; the US PM led the next, driving focus on customer results. Both perspectives became visible, and engagement soared.

4. Build Feedback Loops That Actually Work
What to do: Teach both sides to give feedback in each other’s style—direct, but always constructive. Make feedback a routine, not a surprise.
Example: Teams closed every sprint with a “Start/Stop/Continue” check-in. The US team practiced softening feedback; the India team practiced being more candid. Trust and psychological safety improved quickly.

5. Celebrate Shared Wins—And Shared Learnings
What to do: Shine a spotlight on successes that happened because of your differences.
Example: When India’s process rigor averted a risk, it was celebrated in a global town hall. When the US team’s “just try it” mindset led to a breakthrough, that was spotlighted too. Both became team best practices.

The best India-US teams don’t just “manage around” culture—they make it their competitive advantage.
The next time you hit a bump, ask: are we fighting our differences, or using them to win? What’s one India-US “culture hack” that’s worked for you? Share below—let’s build the new playbook together.
-
"This policy brief describes the unique and valuable roles that international standards play in supporting responsible AI development and governance. International standards:
• Establish a common language and consensus-built definitions that accelerate innovation by enabling more productive collaboration among AI developers, deployers, governments and regulators, and other important stakeholders.
• Set out consensus-driven metrics, benchmarks, and technical requirements that can facilitate transparency, consumer choice, and trade, while remaining adaptable to the diverse contexts in which AI systems are deployed.
• Translate high-level principles for responsible AI into concrete, actionable steps and technical requirements, supporting effective implementation of responsible AI frameworks.
• Offer detailed specifications and guidelines that can be used by regulators to improve the technical rigor and international interoperability of AI-related regulation, improving governance in a way that facilitates trade and eases compliance for AI developers.
• Underpin robust conformity assessment procedures that enable verification of technical and organizational requirements, helping to improve the reliability, quality, and trustworthiness of AI systems.
In short, international standards provide a technical foundation for advancing trustworthy AI innovation and governance."

...

"As AI technologies and application contexts continue to evolve, international standards can provide a robust foundation for responsible AI innovation that serves the global public interest. Strengthened collaboration between standards development organizations, national standards bodies, governments and regulators, and civil society can help ensure that AI's transformative potential benefits people around the world while minimizing its risks."

ISO - International Organization for Standardization
-
Humanizing AI Through the Kano Model

In an era where generative AI has become a ubiquitous offering, true differentiation lies not in merely adopting the technology but in integrating human values into its core. Building on my earlier discussion about applying the Kano Model to Gen AI strategy, let’s explore how this framework can refocus development metrics to prioritize ethics and human-centricity. By aligning AI systems with human needs, organizations can shift from functional tools to trusted partners that inspire lasting loyalty.

Traditional metrics such as speed, scalability, and model accuracy have evolved into basic expectations, the “must-haves” of AI. What truly elevates a product today is its ability to embody values like safety, helpfulness, dignity, and harmlessness. These qualities, categorized as “delighters” in the Kano Model, transform AI from a transactional tool into a meaningful collaborator.

Key Human-Centric Differentiators
Safety: Proactive safeguards must ensure AI systems protect users from risks, whether physical, emotional, or societal. Safety is non-negotiable in building trust.
Helpfulness: Personalized, context-aware interactions demonstrate empathy. AI should anticipate needs and adapt to individual preferences, turning routine tasks into meaningful experiences.
Dignity: Ethical design principles—fairness, transparency, and privacy—must underpin AI development. Respecting user autonomy fosters long-term trust and engagement.
Harmlessness: AI outputs and recommendations should prioritize user well-being, avoiding unintended consequences like bias, misinformation, or psychological harm.

This human-centered approach represents a paradigm shift in technology development. While traditional KPIs remain important, they are no longer sufficient to stand out in a crowded market. Organizations that embed human values into their AI systems will not only meet user expectations but exceed them, creating emotional connections that drive loyalty.
By applying the Kano Model, businesses can systematically align innovation with ethics, ensuring technology serves humanity rather than the other way around. The future of AI isn’t just about efficiency; it’s about elevating human potential through thoughtful, responsible design. How is your organization balancing technical excellence with human values?
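For readers unfamiliar with the mechanics behind Kano categories: the model pairs a "functional" survey question (how would you feel if the product had this attribute?) with a "dysfunctional" one (how would you feel if it did not?), and maps each answer pair to a category such as must-be, performance, or delighter. A minimal sketch, assuming a simplified evaluation table (Reverse and other edge combinations are collapsed to "Indifferent" here) and an invented example attribute:

```python
# Simplified Kano survey classifier. The category labels, the example
# attribute, and the collapsing of edge cases into "Indifferent" are
# illustrative assumptions, not the full published evaluation table.

# Possible answers to both the functional question ("How would you feel
# if the product HAD this attribute?") and the dysfunctional question
# ("... if it did NOT?").
ANSWERS = ("like", "expect", "neutral", "tolerate", "dislike")

# (functional answer, dysfunctional answer) -> Kano category
KANO_TABLE = {
    ("like", "dislike"):     "One-dimensional (performance)",
    ("like", "expect"):      "Attractive (delighter)",
    ("like", "neutral"):     "Attractive (delighter)",
    ("like", "tolerate"):    "Attractive (delighter)",
    ("expect", "dislike"):   "Must-be (basic expectation)",
    ("neutral", "dislike"):  "Must-be (basic expectation)",
    ("tolerate", "dislike"): "Must-be (basic expectation)",
    ("like", "like"):        "Questionable",
    ("dislike", "dislike"):  "Questionable",
}

def classify(functional: str, dysfunctional: str) -> str:
    """Map one respondent's answer pair to a Kano category."""
    # Anything not covered above (including Reverse pairs) is collapsed
    # to "Indifferent" in this sketch.
    return KANO_TABLE.get((functional, dysfunctional), "Indifferent")

# Hypothetical attribute: "proactive safety guardrails"
print(classify("expect", "dislike"))  # Must-be (basic expectation)
print(classify("like", "neutral"))    # Attractive (delighter)
```

Running attributes like safety or helpfulness through such a survey periodically is one concrete way to check whether they are still delighters or, as the post argues about speed and accuracy, have already slid into must-haves.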
-
AI isn’t just a powerful tool for accelerating sustainability work. It can also help us move faster in advancing human rights - and we’re piloting AI models that do just that. Amazon has hundreds of thousands of suppliers worldwide - that’s a massive scope. So we’re harnessing AI to keep pace with, prevent, and respond to human rights risks in our network. Here are two examples of how that’s taking shape:

🔍 Smarter Risk Prediction: We developed an AI model that can analyze tens of thousands of historical social audits to identify patterns, spot warning signs, and flag high-risk suppliers - essentially helping us zoom in on what matters. The testing results were impressive - the tool successfully identified about 9 out of every 10 high-risk sites, with 85% overall accuracy.

⏱ Faster Insights: It can take a human rights manager up to four hours to manually review a supplier audit report. But we developed an AI tool that processes a report in just minutes - identifying risks, rating the seriousness, and suggesting next steps. Early versions helped us process audit reports 65% faster - a remarkable difference!

It’s important to note - these AI tools aren’t replacing human decision-making. They’re designed to support, enhance, and accelerate our work. Every AI recommendation gets reviewed by our experts - and their input actually helps improve the system over time. We’re still in early stages, but I’m inspired by the potential.

On this #HumanRightsDay, I invite you to learn more about our work from Devex’s comprehensive interview with Leigh Anne DeWine, our Director of Human Rights & Social Impact, who is making a great impact every day here at Amazon. Thanks to Leigh Anne and our entire Human Rights and Social Impact team for the incredibly critical work you do. 🙏 https://lnkd.in/gSWZAWFB
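A note on how the two headline numbers relate: "9 out of every 10 high-risk sites" is recall, while "85% overall accuracy" counts every correct call, including low-risk sites correctly cleared. A quick sketch of the arithmetic, with confusion-matrix counts invented purely to show that both figures can hold at once (they are not the actual audit data):

```python
# Illustrative confusion-matrix arithmetic only: the counts below are
# invented so that recall = 0.90 and accuracy = 0.85 hold simultaneously.

def recall(tp: int, fn: int) -> float:
    """Share of truly high-risk sites that the model flags."""
    return tp / (tp + fn)

def accuracy(tp: int, tn: int, fp: int, fn: int) -> float:
    """Share of all sites (high- and low-risk) classified correctly."""
    return (tp + tn) / (tp + tn + fp + fn)

# Hypothetical network: 100 high-risk sites, 900 low-risk sites.
tp, fn = 90, 10        # 90 of the 100 high-risk sites are flagged
fp = 140               # low-risk sites flagged by mistake
tn = 900 - fp          # low-risk sites correctly cleared

print(recall(tp, fn))              # 0.9  -> "9 out of every 10"
print(accuracy(tp, tn, fp, fn))    # 0.85 -> "85% overall accuracy"
```

The two metrics answer different questions, which is why expert review of every flagged site still matters: high recall can coexist with a meaningful number of false alarms.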
-
🚨 [AI RESEARCH] "Assessing the (Severity of) Impacts on Fundamental Rights," by Gianclaudio Malgieri & Cristiana Santos, is a must-read for everyone in AI governance. Quotes:

- "Without straightforward criteria to measure those risks to fundamental rights in practice (both at single user and collective levels), most of the EU digital strategies that demand assessment of impact and risks are meaningless. Eminent examples are the Data Protection Impact Assessment (DPIA) and Data Protection by Design (DPbD) in the GDPR, Fundamental Rights Impact Assessments (FRIA) in the AI Act, or the systemic risk assessment in the Digital Services Act (DSA)." (page 2)

- "Reflecting on the core values underlying fundamental rights, i.e. the dignity principle justified by the flourishing of individuals, the severity of interference should be determined based on the core meaning of each fundamental right which might be affected. We should analyse how the “content” of fundamental rights is interfered with and how severely. For instance, privacy rights encompass values such as personal autonomy, human dignity, physical and mental integrity, and identity. Similarly, freedom rights involve personal autonomy, pluralism, and democracy. The full realisation of these values serves as the benchmark for assessing interferences of external activities on these rights and freedoms. To put it simply, we consider that risks (of meaningful interferences) to fundamental rights occur when certain practices might compromise the dignity and the flourishing of individuals." (page 11)

- "(...) Our approach aims to overcome the limitations of both harm-based and rights-based methodologies. We do accept the right-based approach, but to make it more operational, we would focus on the severity of “interferences” rather than on “violations.” To measure interferences, we first look at infringements of positive legal rules implementing fundamental rights.
To avoid a strong dichotomy, we introduce various parameters to measure the severity of rights infringements from both objective and subjective perspectives. In addition, real-life consequences of those violations matter, but not (only) as losses or damages since we consider a broad range of consequences in terms of changes in life circumstances." (page 11)

- "We emphasise the need to avoid ambiguous terms, vague definitions, or self-referential loops when defining the impact on fundamental rights. Recognizing that fundamental rights are not material objects but principles aimed at preserving dignity, we incorporate parameters that reflect their legally, politically, socially, and culturally situated nature." (page 30)

➡ Read the full paper below.

🔥 To stay up to date with the latest developments in AI policy, compliance & regulation - including excellent research - join 34,000+ people who subscribe to my weekly newsletter (link below). #AIResearch #AIGovernance #AICompliance #AIAct #AIRegulation
-
If we’re serious about technology and equity, we have to be serious about access to the internet as a right 🚨 The internet is no longer a luxury… it’s a basic necessity. It’s how we learn, work, connect, and increasingly, how we access healthcare and opportunities. During COVID, the digital divide wasn’t theoretical. It played out in real time: students unable to attend classes, families locked out of telehealth, communities isolated from one another. The result was a widening of disparities that already existed. Now, as AI becomes more integrated into our lives and systems, conversations about equity often focus on bias in algorithms, representation in datasets, or the risks of exclusion from decision-making tables. Those are critical. But if whole communities cannot reliably get online in the first place, they won’t even reach the starting line. Equity in AI, or technology more generally, cannot exist without equity in access.
-
What does equitable access to generative AI look like? In my research for my upcoming book, I interviewed leaders at a bank in Africa who had struggled with high unemployment rates—not because there weren’t jobs available but because the job-seekers there didn’t have the education necessary to fill them.

Then came generative AI. Those without the education to take on entry-level roles in the bank’s call centers could now be trained on the basic skills necessary to do those jobs. What happened at this African bank is in line with what’s happening across the board and around the globe. Studies show that although AI boosts productivity among all workers, it lifts those in the bottom half of the performance distribution by 43 percent, whereas top performers see a more modest 17 percent gain. Generative AI is closing the skills and knowledge gap between competency and excellence. And it has the potential to close the opportunity gap as well.

As we design our AI-powered products and services, let’s ensure that there is not only equitable access but also equitable outcomes that address some of the major issues in our organizations and communities.
-
As we begin 2026, one priority is clear: ensuring digital transformation advances rights, inclusion, and opportunity for all. Digital technologies shape how we live, work, and participate in society. But without the right safeguards, they risk deepening inequalities rather than reducing them. UNDP’s new Digital Rights Dashboard, developed with support from the Republic of Korea, is an important step toward addressing this challenge. Piloted in Colombia, Lebanon, Mauritania, North Macedonia and Samoa, it offers a practical and data-driven way to understand and strengthen digital rights ecosystems. What excites me most is its potential. The Dashboard brings together insights from over 140 countries, helping governments, civil society, and partners identify gaps, track progress, and spark the dialogue needed to build digital futures that are safe, inclusive, and grounded in human rights. Congratulations to the teams and partners who made this possible. I look forward to seeing how countries use these insights to shape more equitable digital transformations. Explore the Dashboard: https://lnkd.in/eMqHi2xH Read the report: https://lnkd.in/ezbNcbiJ Read the blog: https://lnkd.in/eS6DDSwK