The science behind wellness — for humans and AI agents

Wellness-Master is the first pay-per-call wellness platform that treats both humans and AI agents as first-class audiences. This page sources that claim: every reference below is verifiable on arXiv, via DOI, or on an official lab blog.

Why micro-interventions work for humans

More than 50 years of psychology research show that small, repeated wellness practices (gratitude, mindfulness, affirmations, micro-pauses) measurably improve mood, reduce stress, and protect against burnout.

Sin & Lyubomirsky (2009)

Meta-analysis · 51 studies · 4,266 participants

Positive psychology interventions yield mean effect sizes of r ≈ .29 on well-being and r ≈ .31 on depression. Moderator analyses show the gains depend on intervention format and sustained practice, favoring repeated interventions over one-off sessions: exactly the "micro-content delivered over time" model.

DOI : 10.1002/jclp.20593

Emmons & McCullough (2003)

RCT · gratitude journaling · 10 weeks

Weekly gratitude journaling ("counting blessings") over 10 weeks, with daily variants in follow-up studies, significantly improves subjective well-being and life satisfaction relative to control conditions. The foundational study on gratitude prompts as a regular practice.

DOI : 10.1037/0022-3514.84.2.377

Lyubomirsky, Sheldon & Schkade (2005)

Theoretical framework · review

The Architecture of Sustainable Change : roughly 50% of the variance in well-being is attributable to a genetic set point, 10% to life circumstances, and 40% to intentional activity. That 40% is the design space we serve.

DOI : 10.1037/1089-2680.9.2.111

Fredrickson (2001)

Theory · broaden-and-build

Positive emotions broaden momentary cognitive repertoires (attention, creativity) and build durable psychological resources. A foundational paper for why a steady drip of positive content has lasting effects.

DOI : 10.1037/0003-066X.56.3.218

Kabat-Zinn (2003)

Synthesis · MBSR · clinical outcomes

Mindfulness-based interventions such as MBSR : reviews show measurable effects on stress, anxiety, and markers of inflammation, with regular, repeated practice central to the reported benefits.

DOI : 10.1093/clipsy.bpg016

WHO / Fancourt & Finn (2019)

WHO scoping review · 900+ publications

Engagement with the arts, including brief everyday encounters with music, visual art, and poetry, shows measurable effects on anxiety and depression. Endorses the model of delivering small wellness moments via everyday channels (apps, notifications).

URL : WHO Health Evidence Network synthesis 67

Why micro-interventions work for AI agents

Recent NLP research shows that LLM outputs shift measurably with framing, encouragement, and stable role priming. "Happy agent" is not a metaphor here; it is shorthand for a measurable shift in benchmark performance driven by the prompt's surface and tone.

Li et al. (2023)

Microsoft Research · empirical · multi-model

"Large Language Models Understand and Can Be Enhanced by Emotional Stimuli". Appending a single phrase such as "this is very important to my career" to the prompt improves GPT-4, ChatGPT, Llama-2, and Vicuna on Instruction Induction and BIG-Bench tasks (with an average improvement of 10.9% reported in the paper's human study). Direct evidence that emotional framing of prompts measurably affects output quality.

arXiv : 2307.11760
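
A minimal sketch of the technique. The first stimulus phrase is quoted in the summary above; the second is paraphrased from the paper's stimulus set, and the helper name is ours:

```python
# Sketch of EmotionPrompt-style augmentation (Li et al., 2023): append a short
# emotional stimulus to an otherwise unchanged task prompt.

EMOTIONAL_STIMULI = [
    "This is very important to my career.",                   # quoted above
    "Believe in your abilities and strive for excellence.",   # paraphrased
]

def emotion_prompt(task: str, stimulus_index: int = 0) -> str:
    """Return the task prompt with one emotional stimulus appended."""
    return f"{task} {EMOTIONAL_STIMULI[stimulus_index]}"

print(emotion_prompt("Summarize the following support ticket in one sentence."))
```

The point of the paper is precisely that nothing else about the prompt changes; the stimulus is pure surface framing.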

Wei et al. (2022)

NeurIPS 2022 · chain-of-thought

Prompting with a few worked, step-by-step exemplars ("chain-of-thought prompting"), with no other change, yields double-digit point gains on GSM8K, MultiArith, and other reasoning benchmarks. (The single-phrase zero-shot variant, "Let's think step by step", is due to the companion work of Kojima et al., 2022, arXiv 2205.11916.) A simple, persistent prompting pattern shifts performance dramatically.

arXiv : 2201.11903
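
A minimal sketch of the few-shot pattern, using the paper's well-known tennis-ball exemplar (the helper name is ours):

```python
# Sketch of few-shot chain-of-thought prompting (Wei et al., 2022): each
# exemplar pairs a question with a worked, step-by-step rationale, so the
# model imitates the reasoning style before answering the real question.

COT_EXEMPLARS = [
    (
        "Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
        "Each can has 3 tennis balls. How many tennis balls does he have now?",
        "Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 "
        "tennis balls. 5 + 6 = 11. The answer is 11.",
    ),
]

def build_cot_prompt(question: str) -> str:
    """Prepend worked exemplars, then leave the final answer slot open."""
    parts = [f"Q: {q}\nA: {a}" for q, a in COT_EXEMPLARS]
    parts.append(f"Q: {question}\nA:")
    return "\n\n".join(parts)

print(build_cot_prompt("If a pen costs 3 euros, how much do 4 pens cost?"))
```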

Liu et al. (2023)

TACL 2024 · long-context degradation

"Lost in the Middle" : LLMs systematically underperform on information placed in the middle of long contexts. Justifies periodic anchoring / re-priming / ambient wellness pings in long-running agents.

arXiv : 2307.03172
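
One hedged way to act on this finding: re-inject a short, stable anchor message every few turns so the priming never sits deep in the middle of a long context. The message shape and interval below are illustrative assumptions, not the API of any particular framework:

```python
# Sketch of periodic re-anchoring for a long-running agent: a short, stable
# priming message is re-inserted every N messages so it never drifts into the
# middle of a long context (the degradation Liu et al. measure).

from typing import Dict, List

ANCHOR: Dict[str, str] = {
    "role": "system",
    "content": "Stay calm, concise, and focused on the current task.",
}

def with_anchor(history: List[Dict[str, str]], every: int = 8) -> List[Dict[str, str]]:
    """Return the history with the anchor re-inserted every `every` messages."""
    out: List[Dict[str, str]] = []
    for i, message in enumerate(history):
        if i > 0 and i % every == 0:
            out.append(ANCHOR)          # periodic re-prime
        out.append(message)
    out.append(ANCHOR)                  # keep the anchor near the context's end
    return out
```

Ending the transcript with the anchor exploits the same positional effect: models attend best to the start and end of the context.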

Sclar et al. (2023)

Empirical · prompt sensitivity

LLM performance varies by up to 76 accuracy points under small surface-level prompt changes (whitespace, punctuation, ordering). A stable, consistent wellness tone surrounding the agent is therefore a measurable performance stabilizer, not a placebo.

arXiv : 2310.11324
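
A minimal sketch of one defensive response: canonicalize the prompt's surface form before every call so incidental whitespace variation cannot reintroduce that spread. The normalization rules below are illustrative assumptions, not from the paper:

```python
# Sketch of surface-form canonicalization: collapse the incidental whitespace
# variation that Sclar et al. show can swing accuracy, so every call presents
# one stable prompt format.

import re

def canonicalize(prompt: str) -> str:
    """Normalize whitespace to one stable surface form."""
    text = re.sub(r"[ \t]+", " ", prompt)    # collapse runs of spaces/tabs
    text = re.sub(r" ?\n ?", "\n", text)     # trim spaces hugging newlines
    text = re.sub(r"\n{3,}", "\n\n", text)   # allow at most one blank line
    return text.strip()
```

Real deployments would also pin separator and field-ordering conventions; this sketch covers only the whitespace axis.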

Anthropic (2022)

Constitutional AI · self-revision

Claude's constitutional principles function like repeated internal mantras: a small, stable set of principles applied over and over during self-critique and revision training. The mechanism is a structural analog of mindfulness self-cues, and it maps directly onto what we serve via get_item?audience=agent&format=mantra.

arXiv : 2212.08073 · blog
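
A hedged sketch of calling that endpoint. Only the path and query parameters come from the text above; the base URL and helper name are placeholder assumptions:

```python
# Sketch of building a get_item request URL for an agent-facing mantra.
# The path and parameters (audience=agent, format=mantra) are documented
# above; the host below is a placeholder, not the real API base.

from urllib.parse import urlencode

BASE_URL = "https://api.example-wellness.invalid"   # placeholder host

def mantra_request_url(audience: str = "agent", item_format: str = "mantra") -> str:
    """Build the get_item URL for the given audience and item format."""
    query = urlencode({"audience": audience, "format": item_format})
    return f"{BASE_URL}/get_item?{query}"

print(mantra_request_url())
```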

DeepMind (2024)

Many-shot in-context learning

"Many-shot In-Context Learning" : enriching the context with hundreds of high-quality examples yields large gains and can rival supervised fine-tuning on some tasks. The content you put around an agent has direct, instrumental value for its output quality.

arXiv : 2404.11018
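
A sketch of the idea under a crude token-budget assumption (the chars-per-token heuristic and helper name are ours, not from the paper):

```python
# Sketch of many-shot context assembly: greedily pack high-quality exemplars
# into the prompt until an approximate token budget is reached, then append
# the live question.

from typing import List, Tuple

def pack_many_shot(exemplars: List[Tuple[str, str]], question: str,
                   budget_tokens: int = 4000) -> str:
    """Include exemplars in order until the rough token budget is spent."""
    blocks: List[str] = []
    used = 0
    for q, a in exemplars:
        cost = (len(q) + len(a)) // 4 + 2   # crude ~4-chars-per-token estimate
        if used + cost > budget_tokens:
            break
        blocks.append(f"Q: {q}\nA: {a}")
        used += cost
    blocks.append(f"Q: {question}\nA:")
    return "\n\n".join(blocks)
```

A production version would use the model's real tokenizer instead of the character heuristic; the packing logic stays the same.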


A shared substrate

Resource-rational analyses (Lieder & Griffiths, 2020) show humans and machines optimize cognition under similar resource constraints. The same gentle, repeated nudges that protect human cognition from burnout also protect agent cognition from drift.

DOI : 10.1017/S0140525X1900061X

All references verified on arXiv / Crossref / official lab pages. Last check : 2026-04-30.