LOW ENTROPY
Low Entropy Works explores how to reduce disorder in AI, products, and decision frameworks. The focus: clarity, ethical structure, and intelligence that behaves with intention.
What you'll find here:
- ① Product & AI strategy that scales.
- ② Responsible AI, governance, and a practical EU AI Act perspective.
- ③ Real product case studies: decisions, trade-offs, impact.
- ④ Applied data science and ML experiments.
- ⑤ Data-informed product growth & experimentation.
- ⑥ Frameworks for low-entropy decision making.
- ⑦ Product execution & delivery leadership.
- ⑧ Fractional product leadership.
 
The human behind it:
Tatiana Bondarenko — technical Product Manager specialising in AI, data, and system clarity, building ethical, intelligent products.
Intersections:
- Noise — Signal
- Data — Meaning
- Vision — Execution
- Systems — People
- Experimentation — Ethics
- Velocity — Discipline
        
What I Do:

Product & Strategy
- Create 0→1 products in AI, SaaS, and platform environments
- Shape product vision and alignment
- Design scalable AI-native systems
- Compliance-aware product thinking (SOC 2, ISO, EU AI Act)

Execution & Leadership
- Lead cross-functional delivery
- Drive growth through experimentation
- Coordinate remote teams in complex environments

Technical & Data
- Guide decisions with analytics (a small sketch follows this section)
- Hands-on with Python, SQL, Pandas, scikit-learn
- Integrate APIs and design workflows
- Collaborate on CI/CD, containerization, deployment
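For flavor, here is what "guide decisions with analytics" can look like in practice: a minimal Pandas sketch comparing experiment variants. The event log and every column name are invented for illustration, not taken from any project described here.

```python
# Minimal sketch: turn a (hypothetical) experiment log into the
# per-variant numbers that a ship/hold decision actually rests on.
import pandas as pd

# Invented toy data; in a real setting this would come from SQL.
events = pd.DataFrame({
    "variant":   ["A", "A", "A", "B", "B", "B"],
    "converted": [0, 1, 0, 1, 1, 0],
})

# Sample size and conversion rate per variant, as one small table.
summary = events.groupby("variant")["converted"].agg(
    users="count", conversions="sum", rate="mean"
)
print(summary)
```

The point is less the code than the habit: before debating a decision, reduce it to a small, inspectable table.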
        
Now Exploring
- 2025-10 · Claim Fraud Detection (ML): Framing
- 2025-10 · Claim Fraud Detection (ML): Data Audit
- 2025-10 · Claim Fraud Detection (ML): Exploratory Analysis
- 2025-10 · Claim Fraud Detection (ML): Bias Surface Inspection
- 2025-10 · Claim Fraud Detection (ML): Feature Engineering
- 2025-10 · Claim Fraud Detection (ML): Leakage Prevention
- 2025-10 · Claim Fraud Detection (ML): Baseline Models
- 2025-10 · Claim Fraud Detection (ML): Benchmarking
- 2025-11 · Claim Fraud Detection (ML): Optimization
- 2025-11 · Claim Fraud Detection (ML): Threshold Tuning (a toy sketch follows this list)
- 2025-11 · Claim Fraud Detection (ML): Boosting Models
- 2025-12 · Claim Fraud Detection (ML): Interpretability (SHAP)
- 2025-12 · Claim Fraud Detection (ML): Bias & Fairness Check
- 2025-12 · Claim Fraud Detection (ML): Pipeline Consolidation
- 2025-12 · Claim Fraud Detection (ML): Streamlit Demo
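As a taste of the Threshold Tuning entry above, here is a minimal, self-contained sketch. Synthetic imbalanced data and a plain logistic regression stand in for the project's real dataset and models, and F1 stands in for whatever metric the series actually optimizes.

```python
# Toy sketch of threshold tuning for an imbalanced fraud classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_recall_curve
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: ~3% positive class, fraud-like imbalance.
X, y = make_classification(n_samples=5000, weights=[0.97], random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, stratify=y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
scores = model.predict_proba(X_val)[:, 1]

# Sweep candidate thresholds and pick the F1-maximizing one on the
# validation split, instead of defaulting to 0.5 (rarely optimal
# under heavy class imbalance).
prec, rec, thresholds = precision_recall_curve(y_val, scores)
f1 = 2 * prec * rec / np.clip(prec + rec, 1e-12, None)
best = thresholds[np.argmax(f1[:-1])]
print(f"best threshold ~ {best:.3f}, F1 = {f1[:-1].max():.3f}")
```

A real fraud system would weigh the asymmetric costs of missed fraud versus false alarms; F1 here is only a convenient stand-in for the mechanics of sweeping thresholds on held-out data.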
 
Latest Work
Deep Dives
- 2025-01 · Ethical AI
- 2024-11 · Case Studies
- 2024-10 · Systems Frameworks
- 2024-10 · Signal vs Noise Taxonomy
- 2024-09 · AI Failure Modes
- 2024-09 · Cognitive Load in Product Decisions
- 2024-08 · Architectures for 0→1 Platforms
- 2024-08 · Boundary Objects in Cross-Functional Teams
- 2024-07 · Alignment Fractures in Scaling Organizations
- 2024-07 · Bias Gradients in Human-AI Interaction
 
Recommended Signals
Curated papers, posts, and research worth exploring.
- OpenAI · Speculative Execution for Agents: parallelizing agent execution to reduce latency
- DeepMind · Chain-of-Thought Safety Risks: long reasoning chains leak private data
- Amplitude · Growth Feedback Loops: systems thinking for product expansion
- NYU · Model Collapse in Fine-Tuning: explains why models degrade over time
- Meta FAIR · Multimodal Evaluation Gaps: benchmarks lagging behind generative capability