Neural networks are increasingly deployed in safety- and mission-critical pipelines, yet many verification and analysis results are produced outside the programming environment that defines and runs the model. This separation creates a semantic gap between the executed network and the analyzed artifact, so guarantees can hinge on implicit conventions (operator semantics, tensor layouts, preprocessing, and floating-point corner cases). We introduce TorchLean, a framework built in the Lean theorem prover that treats learning models as first-class mathematical objects with a single, precise semantics shared by execution and verification. To our knowledge, it is the first to interconnect model definition, execution, and verification over a shared intermediate representation (IR). TorchLean unifies: (1) a PyTorch-style verified API for defining models and training loops, with eager execution and a compiled mode that lowers programs to a shared op-tagged computation-graph IR; (2) float32 semantics, via an executable IEEE-754 binary32 kernel (IEEE32Exec) and proof-relevant rounding models that make numerical assumptions and trust boundaries explicit; and (3) verification, via native interval bound propagation (IBP) and CROWN/LiRPA-style linear bound propagation with certificate checking. We validate TorchLean end-to-end on three concrete use cases: certified robustness for classifiers, physics-informed residual and derivative bounds for PINN-style scientific models, and control-oriented safety and stability checking for a neural controller, alongside mechanized theory results including a universal approximation theorem. Together, these results position TorchLean as semantics-first infrastructure toward fully formal, end-to-end verification of learning-enabled systems.
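To make the bound-propagation component concrete, the following is a minimal NumPy sketch of interval bound propagation (IBP), the simplest of the schemes mentioned above. All names and shapes here are illustrative assumptions, not TorchLean's API: IBP pushes an elementwise box through each layer, using the sign split of the weight matrix for affine maps and monotonicity for ReLU.

```python
import numpy as np

def ibp_affine(lo, hi, W, b):
    """Propagate the box [lo, hi] through x -> W @ x + b."""
    W_pos = np.maximum(W, 0.0)  # positive entries attain extremes at matching endpoints
    W_neg = np.minimum(W, 0.0)  # negative entries flip which endpoint is extremal
    new_lo = W_pos @ lo + W_neg @ hi + b
    new_hi = W_pos @ hi + W_neg @ lo + b
    return new_lo, new_hi

def ibp_relu(lo, hi):
    """ReLU is monotone, so it maps box endpoints directly."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Hypothetical 2-layer network; bound its outputs over an L_inf ball around x0.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((4, 2)), rng.standard_normal(4)
W2, b2 = rng.standard_normal((2, 4)), rng.standard_normal(2)
x0, eps = np.array([0.5, -0.25]), 0.05

lo, hi = x0 - eps, x0 + eps
lo, hi = ibp_relu(*ibp_affine(lo, hi, W1, b1))
lo, hi = ibp_affine(lo, hi, W2, b2)

# Empirical soundness check: every sampled point in the ball stays in the box.
for _ in range(1000):
    x = x0 + rng.uniform(-eps, eps, size=2)
    y = W2 @ np.maximum(W1 @ x + b1, 0.0) + b2
    assert np.all(lo <= y + 1e-9) and np.all(y <= hi + 1e-9)
```

CROWN/LiRPA-style methods tighten these boxes by propagating linear lower and upper bounding functions instead of constant intervals, but the certificate-checking idea is the same: the verified claim is that the network's output over the input region lies inside the computed bounds.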