Every golfer has stood over a bad lie and guessed.
GLA ends the guesswork. Point. Analyze. Execute.
Computer vision reads your lie. A Claude-powered caddie engine synthesizes
lie class, distance, wind, and your skill level into a single,
actionable recommendation.
You get caddie-grade advice — adapted to your skill level,
your distance, and the conditions in front of you.
GPS yardage. Shot tracking. Swing analytics. The golf app market is saturated with tools that measure everything except the shot in front of you.
The lie is the variable every golfer deals with on every shot off the fairway — and every golfer has been solving it with instinct alone. Until now.
GLA layers lie intelligence on top of the inputs other apps already handle: distance, wind, club selection. The result is the complete picture — the one a real caddie gives you at the bag.
Whether you're a 28 handicapper learning how to escape a buried lie or a scratch player fine-tuning carry distance from a tight fairway divot, GLA calibrates its advice to you.
Three systems working in sequence to turn a difficult lie into a confident, executable plan.
A MobileNetV3Small TFLite model classifies 12 distinct lie types — from tight fairway to plugged bunker — with confidence scoring per detection. Rough-medium was deliberately excluded; GLA detects rough and deep rough only, where visual differentiation is reliable. Trained on a proprietary dataset of real-world turf interaction images.
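For the technically curious, here is a minimal sketch of what 12-class detection with per-detection confidence tiering can look like downstream of the model. The class names and tier thresholds are illustrative assumptions — only a handful of lie types are named publicly, and GLA's actual thresholds are its own.

```python
import math

# Hypothetical label set -- the real 12-class taxonomy is GLA's own; only
# tight fairway, rough, deep rough, and plugged bunker are named in the copy.
LIE_CLASSES = [
    "fairway_tight", "fairway_neutral", "first_cut", "rough", "deep_rough",
    "bunker_clean", "bunker_plugged", "hardpan", "pine_straw", "divot",
    "uphill_slope", "downhill_slope",
]

def softmax(logits):
    """Convert raw model outputs into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify_lie(logits, high=0.80, medium=0.55):
    """Pick the top class and bucket its probability into a confidence tier.

    The tier thresholds here are illustrative, not GLA's actual values.
    """
    probs = softmax(logits)
    idx = max(range(len(probs)), key=probs.__getitem__)
    p = probs[idx]
    tier = "high" if p >= high else "medium" if p >= medium else "low"
    return LIE_CLASSES[idx], round(p, 3), tier
```

The confidence tier matters because it feeds the caddie engine: a low-confidence detection can be hedged in the advice rather than presented as certainty.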
Claude Sonnet synthesizes lie class, confidence tier, GPS distance, wind conditions, and player skill level into a single executable caddie recommendation. Ball position, weight distribution, club selection, and shot shape — all derived from what the turf is actually doing to the ball.
A dedicated edge device runs the TrainClaw pipeline — autonomous dataset validation, Vertex AI training jobs, and CI/CD deployment to Cloud Run. Every beta image that enters the system is validated, labeled, and fed back into the next training run without manual intervention.
GLA synthesizes up to six inputs to generate each piece of advice. Lie detection is the core capability no other app can replicate — everything else makes it complete.
The lie is the foundation. Every other input — distance, wind, club selection — only becomes useful once you know what the turf is actually doing to the ball. GLA establishes that ground truth first, then builds the recommendation around it.
Each lie class maps to a specific setup prescription: ball position in stance and weight distribution. A neutral fairway lie means ball center, 50/50 weight. A downhill lie shifts to 60% front foot. A buried plug in the bunker goes ball back, 70% front foot. GLA outputs the exact split — not vague guidance.
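The lie-to-setup mapping above is essentially a lookup table. A minimal sketch, using only the three prescriptions stated in the copy (the full 12-class table is GLA's own, and the downhill ball position is an assumption):

```python
# Setup prescriptions from the three examples above; illustrative only.
SETUP_BY_LIE = {
    # lie class: (ball position in stance, % of weight on front foot)
    "fairway_neutral": ("center", 50),
    "downhill_slope":  ("center", 60),  # ball position assumed, not stated
    "bunker_plugged":  ("back",   70),
}

def setup_for(lie_class):
    """Return the exact (ball position, weight split) for a detected lie."""
    ball, front = SETUP_BY_LIE[lie_class]
    return {
        "ball_position": ball,
        "weight_front_pct": front,
        "weight_back_pct": 100 - front,
    }
```

Because the prescription is an exact split rather than a phrase like "favor your front foot," the same table can drive both the beginner-voiced and scratch-voiced advice.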
Lie severity cascades through the entire advice chain. A buried lie in deep rough doesn't just change club selection — it changes swing speed, ball position, landing zone expectation, and risk tolerance. GLA models all of it.
Distance anchors club selection. In scoring range, GLA targets the pin. In layup range, it finds the optimal position for the next shot. Carry expectations adjust based on lie severity before any club recommendation is made.
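The ordering described here — derate carry for the lie first, then choose the club — can be sketched as follows. The carry penalty factors and the club gapping are illustrative assumptions, not GLA's physics model:

```python
# Illustrative carry penalties per lie severity (fraction of stock carry).
CARRY_FACTOR = {"clean": 1.00, "rough": 0.90, "deep_rough": 0.75, "plugged": 0.50}

# Hypothetical player gapping: club -> stock carry in yards.
CLUBS = {"PW": 120, "9i": 135, "8i": 150, "7i": 165, "6i": 180}

def recommend_club(target_yards, lie_severity):
    """Derate every club's carry for the lie, then pick the closest club.

    Mirrors the ordering in the copy: carry expectations adjust for lie
    severity *before* any club recommendation is made.
    """
    factor = CARRY_FACTOR[lie_severity]
    effective = {club: carry * factor for club, carry in CLUBS.items()}
    return min(effective, key=lambda c: abs(effective[c] - target_yards))
```

Note how the same 150-yard shot moves from an 8-iron to a 6-iron once the lie is deep rough — that cascade is the point.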
Skill tier changes the voice, not the intelligence. The same 12-class detection, the same physics model, the same inputs — presented at the depth the player can actually use on the course.
Weight distribution isn't a generic tip — it's dictated by the physics of each surface. GLA outputs the exact prescription for your lie. No guessing where to stand.
Uneven lies change six variables simultaneously — weight, grip, shoulder angle, aim line, ball position, and ball flight shape. GLA maps all six for each slope condition. No guessing, no half-measures.
"I am a Defense & Aerospace Program Manager by trade. I spend my days bridging technical innovation with federal compliance — executing high-stakes hardware/software programs for the DoD. I built GLA because I wanted the same level of data-driven situational awareness I use at work, applied to the most difficult shots in golf."
The discipline that ships flight simulators on compressed timelines for the U.S. Navy — risk identification, systems thinking, zero-defect delivery — is the same discipline behind every architectural decision in GLA.
This is not a weekend app. It is a seriously engineered product built by someone who has spent 20+ years delivering complex systems under real-world pressure, now applying that rigor to a problem every golfer faces on every round.
Live specs as of Validation Phase 1. Updated with each milestone release.
GLA's lie detection model improves continuously as beta users capture images from real courses, real conditions, real lighting. The more lies the model sees, the more accurately it reads the next one.
This is the same data flywheel that powers every world-class computer vision system — real-world image volume beats lab data every time. Beta users aren't just testing GLA. They're training it.
Images are retained anonymously — no personal data is ever attached. You consent explicitly at signup. You can withdraw consent at any time. Your data is never sold or shared.
Most AI apps are static — trained once, deployed, forgotten. GLA runs a fully autonomous retraining pipeline on dedicated edge hardware. Every night, DataClaw validates the training dataset. Every beta image that passes quality checks is labeled and staged. When F1 thresholds are met, TrainClaw submits a new Vertex AI training job, evaluates the result, and deploys the winning model to Cloud Run automatically.
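The final gate in that pipeline — evaluate the candidate, deploy only the winner — can be sketched as a zero-defect promotion check. The floor value and the per-class comparison are illustrative assumptions; the copy states only that F1 thresholds gate the pipeline:

```python
def should_deploy(candidate_f1, incumbent_f1, f1_floor=0.85):
    """Deployment gate: the candidate model ships only if it clears an
    absolute F1 floor AND matches or beats the incumbent on every class.

    Both criteria are assumptions for illustration, in the spirit of the
    automated gatekeeping the TrainClaw pipeline describes.
    """
    meets_floor = all(f1 >= f1_floor for f1 in candidate_f1.values())
    beats_incumbent = all(
        candidate_f1[c] >= incumbent_f1.get(c, 0.0) for c in candidate_f1
    )
    return meets_floor and beats_incumbent
```

A gate like this is what lets the nightly loop run without a human in it: a regression on any single lie class blocks the release automatically.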
This is Defense & Aerospace program discipline — continuous validation, zero-defect criteria, automated gatekeeping — applied to a 12-class golf lie classifier. Beta users don't just test the product. They feed the pipeline.
Validation Phase 1 is active. Beta is limited to 500 users across all skill levels — from first-season beginners to scratch players. Every lie image you take feeds the training dataset, making GLA more accurate for every golfer who comes after you. You're not just using the product. You're building it.
No spam. Access notification only. Unsubscribe anytime.
Lie images you capture are retained and used to improve GLA's classification model. More images = more accurate lie detection for every user. You consent to this at signup.
No personal data is attached to training images. Your name, account, location, and device ID are never linked to the image data sent for training.
GLA does not sell, license, or share your image data with third parties — ever. Training data is used exclusively to improve GLA's own lie detection model.
Used solely for beta access notification. No marketing lists, no third-party sharing. Unsubscribe at any time.