
xtracto · yesterday at 5:28 PM · 0 replies

Location: UTC-6 (Americas) Remote: Yes, strongly preferred. Willing to relocate: Only for the absolutely right opportunity.

Technologies: Python, TypeScript (Nest.js for APIs, React for mobile or web front ends), Ruby, AWS (Architect, DevOps, AI/ML), Terraform, Kubernetes, Docker, PostgreSQL, Cassandra, LLMs, Scikit-learn (plus Matplotlib, Seaborn, pandas, and the rest of the data science stack), Solidity, FinTech/Blockchain. (I've worked professionally with too many to list here without it turning into a wall of text.)

Résumé/CV: baqueiro.com/static/baqueiro2025.pdf

Email: (in my resume)

Compensation Expectation: between $140k and $170k USD gross yearly

Hello Hacker News! I'm a CompSci PhD in AI multi-agent systems (I was doing agent research back in 2004, before agents were cool, haha) and a passionate technologist with a soft spot for building high-impact products and teams.

For over 20 years, I've loved being a hands-on leader in FinTech, building everything from scalable crypto trading desks and AI/ML systems for fraud detection, to migrating entire platforms from monoliths to robust, cloud-native microservices. I thrive on aligning engineering with business goals and tackling complex distributed systems challenges.

* Zero-to-one success: Built the foundational tech/AI/ML at a FinTech, enabling the company's Series A.

* Executive & scaling experience: CTO of a crypto trading desk; Head of Engineering, Product, or AI/ML at high-growth FinTechs.

I'm actively looking for a remote role, though for a truly amazing opportunity I would relocate. I'm most comfortable as a first/founding engineer, and I've also been Head of Tech / CTO building startups' technology teams. Right now I'm most interested in AI/LLM work, really pushing the status quo of both the engineering and the AI/ML. (I would love to get into something like SSI.)

A couple of interesting personal projects I'm working on now:

* I trained a couple of Transformer+MoE models to forecast the price of a REIT-like instrument in Mexico (called a FIBRA). These series show heavy cyclical behaviour every quarter that can be isolated and used as a forecasting signal. Good results, albeit not 'life-changing' yet.
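To give a feel for the "isolate the quarterly cycle" part: here is a toy sketch (my own illustrative function, far simpler than the Transformer models) that estimates the average log-return for each of the four positions in the quarterly cycle and extrapolates it as a baseline forecast.

```python
import math

def quarterly_seasonal_forecast(prices, horizon=4):
    """Toy seasonal baseline: one price per quarter in, `horizon`
    quarters of forecast out. Estimates the average log-return at
    each of the 4 positions in the quarterly cycle, then rolls the
    pattern forward from the latest price."""
    logp = [math.log(p) for p in prices]
    rets = [logp[i] - logp[i - 1] for i in range(1, len(logp))]
    seasonal = []
    for q in range(4):
        # rets[i] is the step into price index i+1, cycle position (i+1) % 4
        qrets = [r for i, r in enumerate(rets) if (i + 1) % 4 == q]
        seasonal.append(sum(qrets) / len(qrets))
    forecast, cur, n = [], logp[-1], len(prices)
    for h in range(1, horizon + 1):
        cur += seasonal[(n - 1 + h) % 4]
        forecast.append(math.exp(cur))
    return forecast
```

On a purely seasonal series this recovers the continuation exactly; on real FIBRA data it only captures the cyclical component, which is exactly the information the models then build on.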

* I'm working on a "live weight adjustment" (live training) framework for Transformer/MoE models, so they can use new information to modify the network's weights slightly without retraining the whole model. Kind of like LoRA/QLoRA, but usable on a live, running model.
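The core idea, as a toy numpy sketch (the class and all names here are mine, purely illustrative): freeze the pretrained weight matrix and learn only a small low-rank delta, one streamed sample at a time, so the base model never has to be retrained.

```python
import numpy as np

class LiveLowRankAdapter:
    """Illustrative sketch of live low-rank adaptation: the pretrained
    matrix W stays frozen; only a small delta (scale * B @ A) is
    updated online, one observation at a time, LoRA-style."""
    def __init__(self, W, rank=2, lr=0.05, scale=1.0):
        self.W = W                                    # frozen, shape (out, in)
        out_dim, in_dim = W.shape
        rng = np.random.default_rng(0)
        self.A = rng.normal(0.0, 0.1, (rank, in_dim)) # trainable down-projection
        self.B = np.zeros((out_dim, rank))            # trainable, starts at zero
        self.lr, self.scale = lr, scale

    def forward(self, x):
        return (self.W + self.scale * self.B @ self.A) @ x

    def observe(self, x, y):
        """One online SGD step on 0.5*||forward(x) - y||^2,
        touching only A and B, never W."""
        err = self.forward(x) - y
        grad_B = self.scale * np.outer(err, self.A @ x)
        grad_A = self.scale * np.outer(self.B.T @ err, x)
        self.B -= self.lr * grad_B
        self.A -= self.lr * grad_A
```

Because B starts at zero, the adapter is a no-op until new information arrives, and the delta can be merged into (or dropped from) the live weights at any time.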

* I'm also experimenting with non-chat uses of text/token LLMs. LLMs are text-generation machines; leaving one "running" independently would be akin to a person's train of thought, which starts when we are born. Having an LLM do that, and finding ways to give it "perceptors" for inputs (like IRQs in computers) to communicate with its environment, could yield interesting results.
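The IRQ analogy, as a minimal sketch (everything here is a stand-in: `model_step` plays the role of a real LLM call, and the queue is the "interrupt line" that splices percepts into the ongoing train of thought).

```python
import collections

def run_continuous(model_step, percept_queue, max_steps=100):
    """Illustrative loop for an LLM left 'running' on its own:
    before each generation step, any pending percepts are spliced
    into the context, like a CPU servicing IRQs between instructions.
    `model_step(context) -> token` stands in for a real LLM call."""
    context = ["<start>"]
    for _ in range(max_steps):
        # Service pending "interrupts" first
        while percept_queue:
            context.append(f"<percept:{percept_queue.popleft()}>")
        context.append(model_step(context))
    return context

# Toy usage: a fake model that reacts when the latest token is a percept
q = collections.deque(["door_knock"])
def toy_model(ctx):
    return "react" if ctx[-1].startswith("<percept:") else "muse"
trace = run_continuous(toy_model, q, max_steps=3)
# trace: ['<start>', '<percept:door_knock>', 'react', 'muse', 'muse']
```

A real version would need a policy for where percepts enter the context window and how old "thought" gets compacted, but the interrupt-driven shape is the part I find interesting.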