Sustainable AI: Balancing Environmental, Economic and Societal Impacts

What Does Sustainable AI Look Like Today—and Who’s Accountable?

Sustainable AI isn’t just about shrinking carbon footprints—it demands economic inclusivity, fair labor transitions and societal resilience. Leaders of the Senior Executive AI Think Tank explore actionable strategies for enterprises to adopt AI that delivers long-term value, safeguards workers and strengthens social trust.

by AI Editorial Team on November 18, 2025

As artificial intelligence continues its rapid advance—from foundational models to enterprise-scale deployments—questions about sustainability are taking on new urgency. While much of the discourse has centered on the carbon footprint of data centers and model training, sustainable AI must also address long-term economic, labor and societal impacts: How will value from AI be shared? Who bears the downstream risks? Well-designed systems matter not only for performance, but also for fairness, trust and longevity.

The Senior Executive AI Think Tank brings together seasoned experts in machine learning, generative AI and enterprise AI applications who offer deep insight into these challenges and opportunities. Below, they explore what truly sustainable AI looks like—beyond energy metrics—and who should be accountable.

Consider Fair Labor and Economic Sustainability As Well

Chandrakanth Lekkala, Principal Data Engineer at Narwal.ai, argues sustainability requires attention to labor and long-term economic outcomes as much as it does to energy metrics. He says sustainable AI should pay “a lot of attention to fair labor practices, economic sustainability on a long-term level, and positive contributions to society.”

Lekkala also warns that unchecked centralization risks concentrating gains. “AI should prevent the increase of inequality,” he says, adding that investments must include “reskilling efforts supported by social safety nets” so benefits don’t accrue to just a handful of firms.

For executives, that means budgeting explicitly for reskilling, adopting procurement practices that reward equitable outcomes, and structuring product road maps to measure social impact as a first-order KPI. Taken together, these moves turn sustainability from a reputational line item into a repeatable engineering and investment practice.

Make Decisions Explainable and Put People Back in the Loop

Transparency is the glue that holds sustainable AI together. Uttam Kumar, Engineering Manager at American Eagle Outfitters, says societal impact depends on clear, defendable choices: “Sustainable AI’s societal impact requires mandatory transparency in decision-making and investment in human-AI collaboration roles to maintain labor value and societal trust,” he says, adding, “If you cannot explain the decision, you cannot defend the consequence, which is the foundation of institutional trust.”

That phrasing underscores two operational necessities: Enable explainability across models and create explicit human roles for review and remediation. Organizations can make this concrete by publishing decision logs, creating cross-functional review boards and assigning a Chief AI Officer with clear accountability for outcomes. Those steps turn opaque automation into auditable workflows that regulators, customers and employees can trust.
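
As a minimal sketch of what such a decision log might capture (the field names and the log_decision helper here are illustrative assumptions, not a standard schema), each automated decision can be recorded with enough context for a reviewer to retrace it:

```python
import json
import uuid
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One auditable entry per automated decision (illustrative schema)."""
    model_id: str                 # which model and version produced the decision
    inputs: dict                  # the features or prompt the model saw
    output: str                   # the decision or prediction returned
    explanation: str              # human-readable rationale for the output
    reviewer: str = "unassigned"  # person or team responsible for review
    decision_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def log_decision(record: DecisionRecord, path: str = "decisions.jsonl") -> None:
    """Append the record to a JSON Lines file that auditors can replay."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Example: a hypothetical credit decision, logged with rationale and reviewer.
log_decision(DecisionRecord(
    model_id="credit-scorer-v3.2",
    inputs={"income": 72000, "utilization": 0.41},
    output="limit_increase_denied",
    explanation="High revolving utilization outweighed the income signal.",
    reviewer="ops-review@example.com",
))
```

An append-only log like this is what lets a review board or a regulator replay any decision after the fact, which is precisely the auditability Kumar describes.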

“AI’s most incredible legacy won’t be what it predicts, but what it preserves.”

– Dileep Rai, Manager of Oracle Cloud Technology at Hachette Book Group (HBG)

Make AI Humane—Embed Ethics into the Stack

“Sustainable AI isn’t built in a lab; it’s built in conscience,” says Dileep Rai, Manager of Oracle Cloud Technology at Hachette Book Group (HBG). For Rai, sustainability combines environmental stewardship with dignity for workers and fairness in outcomes. “It’s the algorithm that respects the planet, the model that uplifts the worker, the system that scales fairness as much as accuracy,” he says, quoting an engineer: “We optimized for speed. Now we must optimize for soul.”

That ethos translates into concrete practices: human-centered design reviews, diverse product teams and automated checks that measure worker-impact as well as model accuracy. “AI’s most incredible legacy won’t be what it predicts, but what it preserves,” Rai says.

Design for Efficiency—Then Share Accountability

Reducing wasteful compute is a technical and moral imperative. Charles Yeomans, CEO and Founder of Atombeam, says sustainable AI “means more than green data centers; it’s about designing systems that learn and scale efficiently.” He points to a fundamental inefficiency: The same heavy computations are needlessly repeated. “If 100 people ask an AI the same thing, it still runs the same heavy computation each time,” he says, and argues systems should “remember what’s been learned, and apply it intelligently.”
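
A minimal sketch of that idea, with a hypothetical expensive_inference function standing in for a real model call: cache answers keyed by the normalized question, so repeated queries reuse earlier work instead of re-running the computation.

```python
from functools import lru_cache

def expensive_inference(prompt: str) -> str:
    """Stand-in for a costly model call (hypothetical)."""
    return f"answer to: {prompt}"

def normalize(prompt: str) -> str:
    """Collapse case and whitespace so equivalent questions share one cache key."""
    return " ".join(prompt.lower().split())

@lru_cache(maxsize=10_000)
def cached_answer(key: str) -> str:
    # This body runs only on a cache miss; hits return the stored answer.
    return expensive_inference(key)

def ask(prompt: str) -> str:
    return cached_answer(normalize(prompt))

# 100 users asking the same question trigger one computation, not 100.
for _ in range(100):
    ask("What is sustainable AI?")
print(cached_answer.cache_info())  # CacheInfo(hits=99, misses=1, ...)
```

A production system would extend this with semantic matching of near-duplicate prompts and a cache shared across servers, but the principle is the same: remember what has been computed and reuse it.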

Finally, Yeomans frames responsibility as collective. “Accountability should be shared. Developers must prioritize efficiency, companies must deploy AI responsibly and policymakers should incentivize innovation that serves both progress and the planet,” he says.

Protect Equity with Governance and Workforce Pathways

Sustainability is also a governance question: Who bears the costs of disruption, and how are benefits distributed? Raghu Para of Ford Motor Company stresses that sustainable AI “must go beyond energy-efficient training and it must sustain trust, equity and economic inclusion over time.” He urges organizations to align model objectives with societal resilience rather than shareholder return alone. “Labor displacement must be met with upskilling pipelines; economic gains must be distributed, not concentrated,” he says, and insists that “responsibility can’t fall on a single actor.”

To operationalize that vision, “Developers, platform providers, regulators and enterprise adopters must share accountability through standards, incentives and enforcement,” Para says. Those steps help ensure technological gains translate into durable, equitable value.

Close the Lifecycle Accountability Gap

Bhubalan Mani, Lead for Supply Chain Technology and Analytics at GARMIN, warns that most organizations still assess AI as a point product rather than a lifecycle system. “Sustainable AI demands we recognize the lifecycle accountability gap,” he says, adding that inference costs and social externalities often exceed what early-stage sustainability metrics capture. Mani calls for “distributed accountability architectures—tiered responsibility where developers secure infrastructure ethics, deployers conduct rights assessments and civil society exercises countervailing power.”

His prescription is practical: Map every AI asset from data ingestion through end-of-life, require algorithmic impact assessments before deployment and publish transparency reports that expose downstream costs. Those mechanisms make it easier to ask not just “Can we build this?” but “Should we deploy this?”—and to allocate the answer across stakeholders.
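
One way to make that deployment gate concrete is sketched below; the lifecycle stages and the ready_to_deploy check are illustrative assumptions, not Mani’s own framework. Each AI asset carries a lifecycle stage from ingestion to end of life, and deployment is refused until an impact assessment is on file.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Stage(Enum):
    """Illustrative lifecycle stages, from data ingestion through end of life."""
    DATA_INGESTION = auto()
    TRAINING = auto()
    IMPACT_ASSESSMENT = auto()
    DEPLOYMENT = auto()
    MONITORING = auto()
    END_OF_LIFE = auto()

@dataclass
class AIAsset:
    name: str
    stage: Stage
    impact_assessment_done: bool = False  # has the impact review been completed?

def ready_to_deploy(asset: AIAsset) -> bool:
    """Gate deployment on a completed algorithmic impact assessment."""
    return asset.stage is Stage.IMPACT_ASSESSMENT and asset.impact_assessment_done

asset = AIAsset("demand-forecaster", Stage.IMPACT_ASSESSMENT)
assert not ready_to_deploy(asset)   # blocked: no assessment on file yet
asset.impact_assessment_done = True
print(ready_to_deploy(asset))       # True: the "should we deploy this?" gate opens
```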

“This shifts sustainability from pure energy metrics to a focus on societal resource distribution.”

– Sarah Choudhary, CEO of Ice Innovations

Democratize Compute—A Path to Broader Benefits

Long-term sustainability also requires widening access to compute and capability. Sarah Choudhary, CEO of Ice Innovations, advocates for a “computational democracy model where quantum-classical hybrid systems democratize access to advanced computing.” 

She argues that alternative architectures could enable smaller organizations to participate meaningfully. “Instead of concentrating AI power in tech giants’ data centers, quantum cloud services could enable distributed, efficient processing for smaller organizations,” she says. “This shifts sustainability from pure energy metrics to a focus on societal resource distribution.”

“It’s about building systems that compound human and economic value without dismantling jobs, destabilizing markets or hollowing out trust.”

– Aditya Vikram Kashyap, Vice President of Firmwide Innovation at Morgan Stanley

Treat AI as an Institutional Asset

Aditya Vikram Kashyap, Vice President of Firmwide Innovation at Morgan Stanley, says companies that want sustainable AI must elevate it to boardroom priority. “Sustainable AI isn’t about cleaner data centers alone; it’s about building systems that compound human and economic value without dismantling jobs, destabilizing markets or hollowing out trust,” he says. He stresses that “accountability can’t sit with engineers in isolation” and that boards, regulators, investors and civil society must share oversight.

That demands new governance routines: board-level AI risk reviews, sustainability clauses in investment decisions and investor reporting on social externalities. When AI is managed as an asset class with clear societal performance metrics, organizations are far likelier to choose durable, equitable paths.

Make Deployment Transparent and Collaborative

Sustainability requires stakeholders at the table. Roman Vinogradov, VP of Product at Improvado, says companies must “prioritize transparency, ensuring AI benefits all stakeholders, including employees and communities,” and that “accountability lies with tech leaders who design these systems, policymakers who regulate them and consumers who demand ethical practices.”

Operational steps are straightforward: Retrain workers for new roles, invest in diversity to prevent bias and work with governments on frameworks that support sustainable AI development. Transparency converts abstract commitments into measurable outcomes and creates civic levers that keep AI aligned with the public interest.

What Business Leaders Should Do Next

  • Embed labor-value commitments in AI projects. Allocate resources for worker reskilling and include value-sharing mechanisms so AI benefits don’t all accrue to capital.
  • Design for decision transparency and human-AI collaboration. Ensure AI outputs are explainable and subject to human oversight, and designate a senior officer accountable for outcomes.
  • Architect for efficiency and minimize waste. Select model architectures and infrastructure that reduce redundant compute, optimize resource use and incorporate efficiency KPIs.
  • Adopt equity and governance frameworks from day one. Define fairness and audit criteria for your AI systems, and involve deployers, regulators and social stakeholders in governance.
  • Think of AI as a long-term asset, not merely a cost lever. Embed human, economic and institutional resilience metrics in board-level oversight and investment frameworks.
  • Map full lifecycle accountability. Track hardware, deployment, labor, environmental and community impacts, and build transparency reports and downstream-cost mechanisms.
  • Democratize access to compute and innovation. Explore distributed infrastructure models, federated AI and partnerships to avoid concentration and open pathways for broader participation.
  • Engage stakeholders and collaborate. Involve employees, communities, regulators and customers in AI workplace design, deployment and evaluation to build trust and resilience.
  • Measure and report transparently. Publish data on labor transitions, bias mitigation, value distribution and societal impacts; hold tech teams, policy teams and leadership jointly accountable.

The Road Ahead for Responsible AI

Sustainable AI is far more than an energy-efficiency challenge. It demands that enterprises, policymakers and society address economic inclusion, labor transition, governance frameworks and lifecycle accountability alongside environmental metrics. The leaders of the Senior Executive AI Think Tank consistently emphasize that sustainability should be a strategic design choice, not a feature add-on.

The future of AI will be shaped not merely by models and compute, but by values, institutional structures and outcomes. Organizations that treat AI as a generational asset will embed fairness, transparency and shared value into their strategy today—and in doing so ensure that AI’s legacy is about preservation as much as prediction.

