The $2 Trillion Banking AI Opportunity: What Financial Institutions Are Actually Deploying in 2026
McKinsey projects AI could add $2 trillion in annual value to banking. Here are the five clearest ROI use cases financial institutions are deploying right now in 2026.
McKinsey's 2025 Global Banking Annual Review estimated that generative AI and advanced analytics could add between $200 billion and $340 billion in annual value to the global banking industry through productivity improvements alone. When including revenue generation, risk reduction, and new product opportunities, the total addressable value expands to approximately $2 trillion annually. That is not a theoretical ceiling -- it is the sum of identified, quantifiable use cases across retail banking, commercial lending, wealth management, capital markets, and back-office operations.
Yet the gap between potential and deployment remains wide. A Q1 2026 survey by Accenture found that 91% of banking executives consider AI a strategic priority, but only 23% have moved beyond pilot programs into production deployment. The barriers are familiar: regulatory uncertainty, legacy infrastructure, talent shortages, and the difficulty of proving ROI before committing significant capital.
This article cuts through the hype to examine the five use cases where banks are actually deploying AI at scale in 2026, the real ROI numbers they are reporting, and the strategic decisions that separate institutions capturing value from those still running pilots. Whether you are a tier-1 global bank or a community institution with $500 million in assets, the deployment landscape has matured enough to provide clear guidance on where to invest.
Use Case 1: Real-Time Fraud Detection
Fraud detection is the most mature AI application in banking and the one with the clearest, most immediately measurable ROI. Every major global bank now uses some form of AI-powered fraud detection, and the technology has advanced significantly from the rules-based systems of the 2010s.
The Scale of the Problem
Global payment fraud losses are estimated to reach $48.8 billion in 2026, according to Nilson Report projections. Card-not-present fraud, which includes e-commerce and digital wallet transactions, accounts for approximately 73% of all card fraud. The volume and velocity of transactions make human-only fraud detection economically impossible -- a large bank processes millions of transactions per day, each requiring real-time assessment.
What AI Fraud Detection Looks Like in 2026
Modern AI fraud detection systems have evolved through three generations:
| Generation | Technology | Capability | Limitation |
|---|---|---|---|
| Gen 1 (2010-2018) | Rules-based + simple ML | Pattern matching against known fraud types | High false positive rates (3-5%), slow adaptation to new fraud patterns |
| Gen 2 (2018-2024) | Deep learning + graph networks | Behavioral modeling, network analysis, anomaly detection | Requires large training datasets, limited explainability |
| Gen 3 (2024-present) | GenAI-enhanced + real-time adaptive | Natural language fraud pattern description, synthetic data generation, self-tuning models | Higher compute cost, regulatory review of model changes |
The current generation combines several AI technologies:
Behavioral biometrics. AI analyzes how a user interacts with their banking app -- typing speed, swipe patterns, device angle, time-of-day patterns -- to build a behavioral fingerprint. Deviations from the fingerprint trigger additional authentication or transaction holds.
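The core of a behavioral fingerprint can be sketched in a few lines. The feature names, baseline values, and step-up threshold below are illustrative assumptions, not a production biometric model:

```python
# Minimal behavioral-fingerprint sketch: per-user baselines for a few
# interaction features, with deviation scored as a summed z-score.
# Feature names, baselines, and the threshold are illustrative assumptions.
baseline = {
    "typing_ms_per_char": (180.0, 25.0),  # (mean, std) from past sessions
    "swipe_speed_px_ms": (1.4, 0.3),
}

def anomaly_score(session):
    """Sum of absolute z-scores across monitored features."""
    return sum(abs(session[k] - mu) / sd for k, (mu, sd) in baseline.items())

normal = {"typing_ms_per_char": 190.0, "swipe_speed_px_ms": 1.5}
suspect = {"typing_ms_per_char": 80.0, "swipe_speed_px_ms": 2.6}  # bot-like

STEP_UP_THRESHOLD = 3.0  # above this, require additional authentication
print(round(anomaly_score(normal), 2))   # 0.73
print(round(anomaly_score(suspect), 2))  # 8.0
print(anomaly_score(suspect) > STEP_UP_THRESHOLD)  # True
```

Production systems track dozens of signals and use learned models rather than hand-set thresholds, but the decision structure -- deviation from a personal baseline triggers step-up authentication -- is the same.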
Graph neural networks. AI maps relationships between accounts, merchants, devices, and transaction patterns to identify fraud rings that would be invisible to transaction-level analysis. JPMorgan Chase reported in early 2026 that its graph-based fraud detection system identified $150 million in previously undetectable fraud ring activity in its first year of deployment.
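In its simplest form, the ring-detection idea reduces to finding connected components in an account graph. The events, the shared-device linking rule, and the ring-size threshold below are hypothetical; production systems like the one described use learned graph neural network embeddings rather than raw connectivity:

```python
from collections import defaultdict, deque

# Hypothetical sketch: accounts that share a device are linked, and connected
# components above a size threshold become candidate fraud rings.
events = [
    ("acct_1", "device_A"), ("acct_2", "device_A"),  # shared device
    ("acct_2", "device_B"), ("acct_3", "device_B"),  # chained link
    ("acct_4", "device_C"),                          # isolated account
]

# Build account-to-account adjacency via shared devices
by_device = defaultdict(set)
for acct, device in events:
    by_device[device].add(acct)

adj = defaultdict(set)
for accts in by_device.values():
    for a in accts:
        adj[a] |= accts - {a}

def components(nodes, adj):
    """BFS over the account graph, yielding connected components."""
    seen, comps = set(), []
    for n in nodes:
        if n in seen:
            continue
        comp, queue = set(), deque([n])
        while queue:
            cur = queue.popleft()
            if cur in comp:
                continue
            comp.add(cur)
            queue.extend(adj[cur] - comp)
        seen |= comp
        comps.append(comp)
    return comps

rings = [c for c in components({a for a, _ in events}, adj) if len(c) >= 3]
print([sorted(c) for c in rings])  # [['acct_1', 'acct_2', 'acct_3']]
```

Note that acct_1 and acct_3 never share a device directly -- the link only emerges at the network level, which is exactly why transaction-level analysis misses these patterns.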
Generative AI for synthetic fraud data. One persistent challenge in fraud detection is the scarcity of fraud examples in training data (fraud represents less than 0.1% of transactions). Banks are now using generative AI to create synthetic fraud scenarios that train detection models on attack patterns that have not yet occurred in the real world.
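At its most basic, synthetic augmentation perturbs known fraud feature vectors to grow the rare positive class. The feature names and jitter scale below are illustrative stand-ins for the generative models (VAEs, diffusion models, LLM-driven scenario generation) banks actually deploy:

```python
import random

random.seed(0)

# Hypothetical sketch: jitter known fraud feature vectors
# [amount_usd, merchant_risk, txns_last_hour] to augment training data.
known_fraud = [
    [4200.0, 0.91, 7.0],
    [3875.0, 0.88, 9.0],
]

def synthesize(samples, n, scale=0.05):
    """Draw n synthetic positives by scaling each feature by +/- `scale`."""
    out = []
    for _ in range(n):
        base = random.choice(samples)
        out.append([f * (1 + random.uniform(-scale, scale)) for f in base])
    return out

synthetic = synthesize(known_fraud, n=100)
print(len(synthetic), len(synthetic[0]))  # 100 3
```

Simple jitter only interpolates around observed attacks; the appeal of generative approaches is producing plausible attack patterns that have not yet occurred, which this sketch cannot do.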
Large language models for alert triage. When fraud detection systems generate alerts, analysts must review them and make disposition decisions. LLMs now assist by summarizing the evidence for each alert, identifying similar historical cases, and recommending actions. This reduces analyst review time by 40-60% per alert.
ROI Data from Production Deployments
| Metric | Pre-AI Baseline | With Gen 3 AI | Improvement |
|---|---|---|---|
| False positive rate | 3-5% | 0.5-1.2% | 60-75% reduction |
| Fraud detection rate | 65-75% | 90-95% | 20-30 pt increase |
| Average detection time | 24-48 hours | Real-time to 15 minutes | 95%+ reduction |
| Analyst cases per day | 25-35 | 60-90 (AI-assisted) | 2-3x improvement |
| Annual fraud losses prevented (large bank) | $300-500M | $450-750M | $150-250M incremental |
| False decline rate (legitimate transactions blocked) | 2.5-4% | 0.8-1.5% | 50-65% reduction |
The false decline reduction is particularly important because it directly impacts revenue. Every legitimate transaction that is falsely declined is a lost sale and a damaged customer relationship. For a bank processing $100 billion in annual card transactions, reducing false declines from 3% to 1% recaptures $2 billion in transaction volume.
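The arithmetic behind that recapture figure:

```python
# Back-of-envelope check of the false-decline claim above.
annual_volume = 100e9         # $100B in annual card transaction volume
decline_cut = 0.03 - 0.01     # false declines reduced from 3% to 1%
recaptured = annual_volume * decline_cut
print(f"${recaptured / 1e9:.0f}B recaptured")  # $2B recaptured
```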
Use Case 2: Robo-Advisory and Wealth Management
AI-driven wealth management has evolved from the simple portfolio allocation algorithms of the 2010s into sophisticated advisory platforms that combine portfolio management, tax optimization, financial planning, and behavioral coaching.
Market Size and Growth
The global robo-advisory market reached approximately $2.8 trillion in assets under management (AUM) by Q1 2026, according to Statista. This represents roughly 3.5% of total investable assets globally -- a meaningful but still small share that indicates significant room for growth.
The Three Tiers of AI Wealth Management
AI wealth management now operates across three distinct tiers:
Tier 1: Fully automated robo-advisors. Platforms like Betterment, Wealthfront, and Schwab Intelligent Portfolios offer fully automated portfolio management with AI-driven allocation, rebalancing, and tax-loss harvesting. These platforms target mass-market investors with $5,000 to $500,000 in investable assets.
Tier 2: AI-augmented human advisors. Most major wealth management firms now equip their human advisors with AI tools that generate portfolio recommendations, identify tax optimization opportunities, and predict client needs. Morgan Stanley's AI advisor platform, launched in 2025, is used by over 15,000 financial advisors managing $3.6 trillion in client assets.
Tier 3: AI-native financial planning. The newest tier uses generative AI to provide comprehensive financial planning -- not just investment management, but retirement planning, insurance analysis, estate planning, and cash flow optimization. These platforms can engage clients in natural-language conversations about their financial goals and generate personalized plans.
Comparison of Robo-Advisory Tiers
| Feature | Tier 1: Fully Automated | Tier 2: AI-Augmented Human | Tier 3: AI-Native Planning |
|---|---|---|---|
| Minimum investment | $0-$5,000 | $100,000-$1M | $1,000-$50,000 |
| Annual fee | 0.25-0.50% | 0.75-1.25% | 0.35-0.65% |
| Portfolio management | Automated | AI-recommended, human-approved | Automated with human escalation |
| Tax optimization | Basic TLH | Advanced TLH + direct indexing | Advanced TLH + tax projection |
| Financial planning | Limited | Comprehensive (human-led) | Comprehensive (AI-led) |
| Behavioral coaching | None | Human advisor | AI-driven nudges and alerts |
| Client interaction | App/web only | Human + digital | Natural language AI + human escalation |
| Target market | Mass market | High-net-worth | Mass affluent |
Performance Data
The performance question -- does AI-managed money outperform human-managed money? -- is nuanced. Pure investment return comparisons are misleading because robo-advisors and human advisors typically operate with different risk profiles, asset classes, and tax situations.
More meaningful metrics:
| Metric | Tier 1 Robo | Tier 2 AI-Augmented | Tier 3 AI-Native |
|---|---|---|---|
| Tax alpha (annual tax savings from TLH) | 0.5-1.0% | 1.0-2.0% | 0.8-1.5% |
| Rebalancing efficiency | 99%+ adherence to target allocation | 95%+ (advisor discretion) | 99%+ |
| Behavioral gap reduction | 1.5-2.0% (vs. DIY investors) | 1.5-3.0% | 1.0-2.0% (early data) |
| Client retention (annual) | 88-92% | 93-96% | 90-94% (early data) |
| Cost per client served | $15-25/year | $500-2,000/year | $40-80/year |
The cost per client metric reveals the economic logic driving AI wealth management. A human advisor serving 150 high-net-worth clients at $1,500 per client incurs $225,000 in annual service cost. An AI-native platform serving 10,000 mass-affluent clients at $60 per client incurs $600,000 in service cost, yet can manage comparable total AUM at a fraction of the cost per dollar managed.
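The comparison works out as follows:

```python
# The service-cost comparison from the paragraph above.
human_cost = 150 * 1_500   # 150 HNW clients at $1,500 per client per year
ai_cost = 10_000 * 60      # 10,000 mass-affluent clients at $60 per client
print(human_cost, ai_cost)                 # 225000 600000
print(human_cost / 150, ai_cost / 10_000)  # 1500.0 60.0 (cost per client)
```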
Use Case 3: AI Credit Scoring and Underwriting
AI credit scoring represents perhaps the highest-stakes AI deployment in banking because it directly determines who gets access to credit -- and at what price. The potential for both value creation and harm is enormous.
The Limitations of Traditional Credit Scoring
Traditional credit scoring models (FICO, VantageScore) rely on a narrow set of data: payment history, credit utilization, length of credit history, credit mix, and recent inquiries. An estimated 26 million Americans are "credit invisible" -- they have no traditional credit record at all -- and another 19 million have files too thin or stale to produce a reliable score, leaving roughly 45 million adults outside the conventional scoring system.
AI credit scoring expands the data inputs to include:
- Cash flow patterns from bank account data (with consumer consent under open banking frameworks)
- Rent payment history
- Utility payment patterns
- Employment verification through payroll data
- Education and professional credentials
- Behavioral data from the application process itself
AI Credit Scoring Performance
| Metric | Traditional FICO | AI-Enhanced Scoring | Improvement |
|---|---|---|---|
| Default prediction accuracy (AUC) | 0.72-0.76 | 0.82-0.88 | 8-15% improvement |
| Population scored | ~85% of US adults | ~95% of US adults | 10 pt expansion |
| Approval rate increase (same risk level) | Baseline | +15-27% | More approvals, same default rate |
| Time to decision | Minutes to days | Seconds to minutes | 90%+ reduction |
| Cost per underwriting decision | $50-150 | $5-20 | 75-90% reduction |
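The AUC figures in the first row measure how well a model rank-orders defaulters above non-defaulters, and can be computed directly from the Mann-Whitney U statistic. A minimal pure-Python version on toy data:

```python
def auc(labels, scores):
    """AUC via Mann-Whitney U: the probability that a randomly chosen
    defaulter (label 1) is scored riskier than a non-defaulter (label 0)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy data: 1 = defaulted. A perfect model ranks all defaulters highest.
labels = [1, 1, 0, 0, 0]
scores = [0.9, 0.6, 0.7, 0.3, 0.2]  # one defaulter out-ranked -> AUC < 1
print(auc(labels, scores))  # 0.8333333333333334
```

An AUC of 0.5 is random guessing and 1.0 is perfect ranking, which is why the move from ~0.74 to ~0.85 in the table is a substantial gain in discriminating power.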
The Fairness and Explainability Challenge
AI credit scoring's greatest promise -- expanded access to credit -- is also its greatest regulatory risk. The Equal Credit Opportunity Act (ECOA) and Fair Housing Act prohibit discrimination in lending on the basis of race, color, religion, national origin, sex, marital status, and age. AI models that use non-traditional data inputs can inadvertently encode these protected characteristics through proxy variables.
The proxy problem. A model might learn that certain ZIP codes correlate with higher default rates. If those ZIP codes are predominantly minority neighborhoods, the model has created a racial proxy even though race was never an explicit input. Similarly, education data, employment patterns, and even behavioral signals during the application process can correlate with protected characteristics.
Regulatory requirements for AI credit scoring:
| Requirement | Regulation | What It Means for AI |
|---|---|---|
| Adverse action notices | ECOA/Regulation B | Must provide specific reasons for credit denial; "the AI decided" is not acceptable |
| Fair lending analysis | ECOA/Fair Housing Act | Must demonstrate model does not have disparate impact on protected classes |
| Model risk management | Fed SR 11-7 / OCC Bulletin 2011-12 | Must validate AI models, document assumptions, and monitor ongoing performance |
| Explainability | CFPB guidance | Must be able to explain how the model reached its decision in terms a consumer can understand |
| Data consent | State privacy laws + FCRA | Must have explicit consent for non-traditional data sources |
Approaches to Fair AI Credit Scoring
Banks deploying AI credit scoring in 2026 are using several approaches to address fairness:
1. Adversarial debiasing. Training a secondary model to detect whether the primary model's outputs can predict protected characteristics. If they can, the primary model is retrained to eliminate the predictive signal.
2. Fairness constraints. Embedding mathematical fairness constraints directly into the model's objective function. The model optimizes for prediction accuracy subject to the constraint that approval rates across protected groups do not differ by more than a specified threshold.
3. Interpretable model architectures. Using model architectures that are inherently more interpretable -- gradient-boosted trees, generalized additive models, or attention-based neural networks with explicit feature attribution -- rather than deep neural networks that resist explanation.
4. Continuous monitoring. Deploying real-time fairness monitoring that tracks approval rates, pricing, and default rates across demographic groups and triggers alerts when disparities exceed thresholds.
# Example: Fairness monitoring dashboard metrics
Fairness Monitoring Report - Q1 2026
=====================================
Approval Rate by Group:
White applicants: 74.2%
Black applicants: 71.8% (ratio: 0.968)
Hispanic applicants: 72.5% (ratio: 0.977)
Asian applicants: 76.1% (ratio: 1.026)
Adverse Impact Ratio Threshold: 0.80
Status: ALL GROUPS ABOVE THRESHOLD -- COMPLIANT
Average APR by Group (approved loans):
White applicants: 6.82%
Black applicants: 7.01% (delta: +0.19%)
Hispanic applicants: 6.91% (delta: +0.09%)
Asian applicants: 6.74% (delta: -0.08%)
APR Delta Threshold: 0.50%
Status: ALL GROUPS WITHIN THRESHOLD -- COMPLIANT
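The adverse impact ratios in the report divide each group's approval rate by the reference group's. A quick recomputation (note: the report uses White applicants as the reference; the classic four-fifths rule benchmarks against the highest-rate group, which here would be Asian applicants and would tighten the ratios slightly):

```python
# Recomputing the adverse impact ratios from the report above.
approval = {"White": 0.742, "Black": 0.718, "Hispanic": 0.725, "Asian": 0.761}
reference = approval["White"]  # reference group used in the report

ratios = {g: rate / reference for g, rate in approval.items()}
compliant = all(r >= 0.80 for r in ratios.values())  # 0.80 threshold

for group, r in ratios.items():
    print(f"{group}: {r:.3f}")
print("COMPLIANT" if compliant else "REVIEW REQUIRED")  # COMPLIANT
```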
Use Case 4: Customer Service and Conversational AI
AI-powered customer service in banking has moved well beyond simple FAQ chatbots. In 2026, conversational AI handles complex multi-step transactions, resolves disputes, and manages relationship-level conversations.
Deployment Maturity by Bank Size
| Bank Category | Typical AI Customer Service Capability | % with Production Deployment |
|---|---|---|
| Global systemically important banks (G-SIBs) | Full conversational AI with transaction capability, escalation routing, sentiment analysis | 95%+ |
| Large regional banks ($50B+ assets) | Conversational AI for common inquiries + transactions, human escalation for complex issues | 80-85% |
| Mid-size banks ($10-50B assets) | Chatbot for FAQs + account inquiries, limited transaction capability | 60-70% |
| Community banks ($1-10B assets) | Basic chatbot or third-party solution, primarily FAQ-focused | 25-35% |
| Credit unions | Varies widely; larger CUs approaching mid-size bank capability | 20-30% |
What Advanced Banking Conversational AI Can Do
The most advanced banking AI systems handle tasks that would have required a branch visit or a 20-minute phone call just two years ago:
Account management. Balance inquiries, transaction history, statement generation, account settings changes, beneficiary management, and recurring payment setup -- all through natural-language conversation.
Dispute resolution. AI can initiate and manage transaction disputes, including gathering information from the customer, classifying the dispute type, filing with the card network, issuing provisional credits, and communicating resolution outcomes. Bank of America reported in early 2026 that its AI system handles 62% of card disputes from initiation to resolution without human intervention.
Loan applications. Conversational AI can guide customers through loan applications, collecting required information, running preliminary eligibility checks, requesting documentation, and scheduling closings. The AI handles the entire application funnel, escalating to human loan officers only for complex cases or final approval of large loans.
Financial guidance. The newest capability is AI-powered financial guidance -- not formal advice (which requires regulatory licensing) but educational conversations about budgeting, saving, and financial planning. These systems analyze the customer's transaction patterns and account balances to provide personalized, contextual guidance.
ROI Metrics for Banking Conversational AI
| Metric | Pre-AI Baseline | With Advanced AI | Impact |
|---|---|---|---|
| Cost per customer interaction | $5-12 (call center) | $0.50-1.50 (AI) | 75-90% reduction |
| First-contact resolution rate | 65-72% | 78-85% | 10-15 pt improvement |
| Average handle time | 8-12 minutes | 3-5 minutes (AI-handled) | 50-65% reduction |
| Customer satisfaction (CSAT) | 72-78% | 76-82% | 4-6 pt improvement |
| Containment rate (resolved without human) | N/A | 55-70% | New metric |
| Revenue per service interaction | $0 (cost center) | $2-8 (cross-sell/upsell) | Cost center to revenue |
The last metric is particularly significant. Traditional customer service is a pure cost center. AI-powered customer service can identify cross-sell and upsell opportunities during service interactions. When a customer asks about their savings account balance, the AI can note that their balance has grown significantly and suggest a higher-yield product. When a customer inquires about a mortgage payment, the AI can check if they qualify for refinancing at a lower rate. These natural, contextual offers convert at 3-5x the rate of traditional outbound marketing.
Use Case 5: Regulatory Compliance and Reporting
Regulatory compliance is one of the largest cost centers in banking. The total cost of compliance for US banks is estimated at $61.4 billion annually, according to the American Bankers Association. AI is attacking this cost from multiple angles.
AI Applications in Banking Compliance
| Compliance Area | AI Application | Maturity Level | Estimated Cost Reduction |
|---|---|---|---|
| Anti-money laundering (AML) | Transaction monitoring, suspicious activity detection, SAR generation | Production (widely deployed) | 30-50% |
| Know Your Customer (KYC) | Document verification, identity matching, risk assessment | Production (widely deployed) | 40-60% |
| Regulatory reporting | Automated data extraction, report generation, cross-checking | Production (growing) | 25-40% |
| Trade surveillance | Communication monitoring, pattern detection, alert generation | Production (capital markets) | 35-50% |
| Fair lending compliance | Disparate impact analysis, model validation, adverse action reasons | Early production | 20-35% |
| BSA compliance | Currency transaction report automation, FinCEN filing | Production | 30-45% |
AML: The Largest Compliance AI Opportunity
Anti-money laundering compliance is the single largest compliance cost for most banks, and it is also where AI has demonstrated the most dramatic improvements.
Traditional AML transaction monitoring systems generate enormous volumes of false alerts -- industry estimates suggest that 95-98% of AML alerts are false positives. Each false alert requires analyst investigation, which typically takes 30-90 minutes. For a large bank generating 50,000+ alerts per month, the cost of investigating false positives alone can exceed $100 million annually.
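A back-of-envelope version of that cost claim, with an assumed fully loaded analyst cost (the alert volume, false positive rate, and investigation time come from the text; at the upper ends of those ranges the total exceeds $100 million):

```python
# Midpoint estimate of false-positive investigation cost. The analyst
# hourly cost is an assumption; volumes and rates are from the text.
alerts_per_month = 50_000
false_positive_rate = 0.97    # midpoint of the 95-98% range
minutes_per_alert = 60        # midpoint of the 30-90 minute range
analyst_cost_per_hour = 120   # assumed fully loaded analyst cost

annual_fp = alerts_per_month * 12 * false_positive_rate
annual_cost = annual_fp * (minutes_per_alert / 60) * analyst_cost_per_hour
print(f"${annual_cost / 1e6:.0f}M/year investigating false positives")
```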
AI-powered AML systems reduce false positives by 50-70% while maintaining or improving detection rates. They achieve this by:
- Contextual transaction analysis. Rather than flagging transactions that exceed fixed thresholds, AI evaluates transactions in the context of the customer's historical behavior, peer group patterns, and account relationships.
- Network analysis. AI maps transaction networks to identify complex layering schemes that involve multiple accounts, entities, and jurisdictions. These schemes are invisible to transaction-level rules.
- Natural language processing for SAR generation. When suspicious activity is identified, AI generates draft Suspicious Activity Reports (SARs) that analysts review and file. This reduces SAR preparation time from 2-4 hours to 20-30 minutes.
The Generative AI Impact on Compliance
Generative AI is adding new capabilities to banking compliance that were not possible with traditional ML:
Regulatory change management. LLMs can monitor regulatory publications, identify changes relevant to specific banking operations, and draft impact assessments. A typical large bank must track regulatory changes across 50+ jurisdictions -- a task that previously required teams of compliance analysts.
Policy and procedure generation. When regulations change, banks must update internal policies and procedures. Generative AI can draft updated policies based on the regulatory text, the bank's existing policy framework, and industry best practices.
Examination preparation. Banks spend significant resources preparing for regulatory examinations. AI can analyze past examination findings, identify likely areas of focus, assemble supporting documentation, and draft responses to anticipated questions.
Community Banks vs. Tier-1: Different Deployment Strategies
The AI opportunity in banking is not limited to JPMorgan Chase and Goldman Sachs. Community banks and smaller institutions have distinct advantages and challenges that demand different deployment strategies.
The Community Bank Advantage
| Factor | Tier-1 Banks | Community Banks |
|---|---|---|
| Legacy infrastructure complexity | Extremely high | Moderate |
| Regulatory burden (relative to size) | High but manageable with resources | High and disproportionate to resources |
| Decision-making speed | Slow (committee-driven) | Fast (executive-driven) |
| Customer relationship depth | Transactional for most customers | Deep personal relationships |
| Data volume | Massive (advantage for custom models) | Limited (requires pre-trained models) |
| AI talent access | Strong (can hire dedicated teams) | Limited (must rely on vendors) |
| Budget for AI | $100M-$1B+ annually | $100K-$5M annually |
Practical AI Deployment Path for Community Banks
Community banks cannot build custom AI systems from scratch. Their path to AI value runs through vendor selection, cloud platform capabilities, and targeted use cases:
Phase 1: Foundation (0-6 months). Deploy cloud-based AI tools from banking technology vendors (FIS, Fiserv, Jack Henry, nCino). Focus on: AI-powered fraud detection add-ons, basic chatbot for customer service, and automated document processing for loan origination.
Phase 2: Expansion (6-18 months). Add AI credit scoring for expanded lending (using vendors like Upstart, Zest AI, or Scienaptic), deploy AI-assisted compliance monitoring for AML and BSA, and implement AI-powered marketing personalization.
Phase 3: Differentiation (18-36 months). Use AI to create competitive advantages that leverage community banks' inherent strengths: deeper customer relationships, local market knowledge, and faster decision-making. Examples include AI-powered small business advisory services, personalized financial wellness programs, and predictive community lending.
# Example: Community bank AI implementation budget
Annual AI Investment Plan - Community Bank ($2B assets)
======================================================
Phase 1 - Foundation:
Fraud detection (vendor add-on): $75,000/year
Customer service chatbot: $40,000/year
Loan document processing: $55,000/year
Subtotal: $170,000/year
Phase 2 - Expansion:
AI credit scoring platform: $120,000/year
AML/BSA compliance AI: $95,000/year
Marketing personalization: $60,000/year
Subtotal: $275,000/year
Phase 3 - Differentiation:
Small business advisory AI: $85,000/year
Financial wellness platform: $70,000/year
Predictive lending analytics: $110,000/year
Subtotal: $265,000/year
Total annual AI investment (at maturity): $710,000/year
Expected annual value generated: $2.5-4.5M/year
ROI multiple: 3.5-6.3x
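The budget arithmetic above checks out:

```python
# Sanity-checking the community bank AI budget above.
phase1 = 75_000 + 40_000 + 55_000     # foundation
phase2 = 120_000 + 95_000 + 60_000    # expansion
phase3 = 85_000 + 70_000 + 110_000    # differentiation
total = phase1 + phase2 + phase3
print(total)  # 710000

low, high = 2_500_000 / total, 4_500_000 / total
print(f"ROI multiple: {low:.1f}-{high:.1f}x")  # ROI multiple: 3.5-6.3x
```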
Cybersecurity and Model Risk Management
Deploying AI in banking introduces new attack surfaces and risk categories that must be managed alongside the traditional risks of banking technology.
AI-Specific Cybersecurity Threats
| Threat | Description | Impact | Mitigation |
|---|---|---|---|
| Adversarial attacks | Manipulating inputs to cause AI models to make incorrect decisions | Fraudulent transactions approved, legitimate ones declined | Adversarial training, input validation, ensemble models |
| Model extraction | Attackers querying AI APIs to reverse-engineer model logic | Competitors or criminals learn to evade fraud detection | Rate limiting, query pattern monitoring, model watermarking |
| Data poisoning | Injecting malicious data into training datasets | Model performance degradation, biased outcomes | Data provenance tracking, anomaly detection in training data |
| Prompt injection | Manipulating AI chatbots to bypass security controls | Unauthorized transactions, information disclosure | Input sanitization, guardrails, output filtering |
| Model drift exploitation | Waiting for model accuracy to degrade before attacking | Fraud detection gaps, compliance failures | Continuous monitoring, automated retraining triggers |
Model Risk Management Framework
The Federal Reserve's SR 11-7 guidance on model risk management (adopted by the OCC as Bulletin 2011-12) applies to AI models with even greater force than to traditional statistical models. Banks deploying AI must implement:
1. Model inventory and classification. Every AI model must be inventoried with its purpose, risk tier, data inputs, performance metrics, and responsible owner. Risk tiers determine the level of validation and monitoring required.
2. Independent model validation. AI models used for credit decisions, fraud detection, or compliance must be validated by an independent team -- not the team that built the model. Validation includes testing on holdout data, stress testing under adverse conditions, and fairness analysis.
3. Ongoing monitoring. Production AI models must be continuously monitored for accuracy degradation, data drift, and fairness drift. Automated alerts trigger when performance falls below acceptable thresholds.
4. Model change management. Every change to a production AI model -- retraining, feature changes, architecture changes -- must go through a formal change management process that includes validation, testing, and approval by model risk management.
5. Documentation and audit trail. Regulators expect comprehensive documentation of AI model development, validation, deployment, and monitoring. This documentation must be sufficient for an examiner to understand how the model works, why it was built this way, and how it is performing.
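Requirement 3, ongoing monitoring, often centers on a data-drift statistic such as the Population Stability Index (PSI). The bin shares below are synthetic, and the 0.10/0.25 alert thresholds are widely used conventions, not regulatory mandates:

```python
import math

def psi(expected, actual, eps=1e-6):
    """Population Stability Index between two binned score distributions,
    expressed as fractional shares that each sum to 1."""
    return sum(
        (a - e) * math.log((a + eps) / (e + eps))
        for e, a in zip(expected, actual)
    )

baseline = [0.10, 0.20, 0.40, 0.20, 0.10]  # score distribution at validation
current  = [0.08, 0.15, 0.35, 0.25, 0.17]  # distribution seen in production

value = psi(baseline, current)
status = "ALERT" if value > 0.25 else ("WATCH" if value > 0.10 else "STABLE")
print(f"PSI={value:.3f} -> {status}")  # PSI=0.074 -> STABLE
```

In a production monitoring pipeline this calculation runs per feature and per score bin on a schedule, and an ALERT status would trigger the retraining and change-management process described in requirement 4.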
The Path Forward: 2026 to 2028
The banking AI landscape is evolving rapidly. Based on current deployment patterns and technology trends, several developments are likely over the next 24 months:
AI-native banks. The first fully AI-native banks -- institutions built from the ground up with AI at the core of every process -- will emerge. These institutions will not retrofit AI onto legacy systems; they will design every workflow, decision, and customer interaction around AI capabilities from day one.
Regulatory clarity. Federal banking regulators (OCC, FDIC, Fed) are expected to issue comprehensive AI guidance by late 2026 or early 2027, providing clearer rules for AI model governance, fairness testing, and explainability requirements.
Consolidation of AI vendors. The current fragmented market of banking AI vendors will consolidate as the major banking technology platforms (FIS, Fiserv, Temenos) acquire AI-native competitors and integrate their capabilities into core banking platforms.
Open banking and AI convergence. Open banking frameworks (Consumer Financial Protection Bureau's Section 1033 rule in the US, PSD2/PSD3 in Europe) will provide AI systems with richer data inputs, improving credit scoring, personalization, and financial planning capabilities.
Conclusion
The $2 trillion banking AI opportunity is real, but it is not evenly distributed. The five use cases examined here -- fraud detection, wealth management, credit scoring, customer service, and regulatory compliance -- represent the clearest ROI paths for financial institutions of all sizes. The institutions capturing the most value are those that moved beyond pilots into production deployment, invested in model risk management and fairness infrastructure, and chose use cases based on measurable business outcomes rather than technological novelty.
For banks still in the pilot phase, the strategic imperative is clear: choose one or two high-ROI use cases, deploy them into production with proper governance, measure the results rigorously, and use those results to build organizational confidence for broader AI deployment. The $2 trillion opportunity is not captured all at once. It is captured one well-executed use case at a time.