J·C·Fjelstul
Consulting LLC

ML Systems

From data pipeline
to validated system.

Custom NLP models, natural language search systems, and production APIs. Built on your data, for your domain. End-to-end delivery without the overhead of a multi-person team. Carefully validated so you avoid expensive mistakes.


Model Training

Custom NLP models built on your domain and your data.

The Challenge

General-purpose NLP APIs are optimized for average performance across many domains. When your use case involves specialized terminology or domain-specific language patterns, off-the-shelf models can underperform — sometimes badly. But the problem usually isn't just the model. It's that the labels are defined too loosely, the evaluation metric doesn't reflect what matters in production, or the training data doesn't represent the cases your system will actually encounter. Getting those things right is upstream of architecture.

The Solution

I build custom NLP models — text classification, named entity recognition, information extraction, span detection — trained and validated on your specific domain and use cases. I cover the full development lifecycle: label definition and data assessment, preprocessing, architecture selection, training and tuning, error analysis and edge case validation, and documentation. You receive a production-ready PyTorch model with a complete model card, preprocessing code, inference pipeline, and 30 days of post-delivery support.
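
To illustrate what the error-analysis step produces, per-class precision and recall can be computed directly from gold and predicted labels. This is a minimal pure-Python sketch, not the actual evaluation tooling; the three document labels are hypothetical:

```python
from collections import Counter

def per_class_metrics(y_true, y_pred):
    """Compute per-class precision and recall from parallel label lists."""
    tp, fp, fn = Counter(), Counter(), Counter()
    for gold_label, pred_label in zip(y_true, y_pred):
        if gold_label == pred_label:
            tp[gold_label] += 1
        else:
            fp[pred_label] += 1  # predicted this class, was wrong
            fn[gold_label] += 1  # missed this class
    metrics = {}
    for c in sorted(set(y_true) | set(y_pred)):
        precision = tp[c] / (tp[c] + fp[c]) if tp[c] + fp[c] else 0.0
        recall = tp[c] / (tp[c] + fn[c]) if tp[c] + fn[c] else 0.0
        metrics[c] = {"precision": precision, "recall": recall}
    return metrics

# Hypothetical labels for a three-class document classifier
gold = ["contract", "memo", "contract", "filing", "memo"]
pred = ["contract", "contract", "contract", "filing", "memo"]
print(per_class_metrics(gold, pred))
```

Per-class numbers like these are what surface the uneven failure modes — a model can look fine on aggregate accuracy while one business-critical class quietly underperforms.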

Deliverables

Trained model weights and configuration files (PyTorch)

Data preprocessing and inference pipeline code

Model card documenting training data, architecture, performance metrics, and known limitations

Performance evaluation report with per-class metrics and error analysis

Integration guidance and deployment recommendations

30 days of post-delivery virtual support and bug fixes

Timeline

3 – 5 weeks: data assessment and preprocessing pipeline, model training and tuning, optimization, documentation, and handoff.

Typical Investment

$20,000 – $55,000

Task complexity is the primary driver — sequence classification sits toward the lower end; span classification, multi-label, or custom architectures toward the upper. Data quality and volume matter significantly: clean, well-labeled data reduces iteration time. Extensive custom preprocessing or particularly noisy data increases it.

Payment schedule
30% upon contract signing
40% upon model training completion and performance review
30% upon final delivery

Recommender Systems

Semantic recommendations at scale.

The Challenge

Keyword-based search and rule-based recommendation systems have a fundamental ceiling: they can only surface what users already know how to ask for. Organizations with large document corpora, product catalogs, or content libraries need systems that understand meaning — that can recommend a relevant document based on semantic similarity, or surface related content a user didn't know existed.

The Solution

I design and build semantic recommendation systems using embedding models and efficient vector retrieval. I encode your corpus with a domain-appropriate embedding model, index it in a vector database optimized for your scale and latency requirements, and apply cross-encoder re-ranking to improve accuracy. Where they improve performance, I use advanced techniques such as custom bi-encoders, HyDE, and graph-augmented retrieval. You get a production-ready API that returns semantically relevant recommendations with filtering, personalization options, and low latency, along with a monitoring dashboard tracking search quality and system performance over time.
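
The retrieve-then-re-rank pattern at the heart of this architecture fits in a few lines. In the sketch below the 2-d "embeddings" and the cross-encoder scorer are stand-ins (a real system uses a learned embedding model and a trained transformer cross-encoder scoring query–document pairs):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def retrieve_then_rerank(query_vec, corpus, cross_score, k=3, top_n=2):
    """Stage 1: cheap vector retrieval of k candidates from the whole corpus.
    Stage 2: expensive cross-encoder re-scoring of only those k."""
    candidates = sorted(corpus, key=lambda d: cosine(query_vec, d["vec"]), reverse=True)[:k]
    reranked = sorted(candidates, key=cross_score, reverse=True)
    return [d["id"] for d in reranked[:top_n]]

# Hypothetical corpus with toy 2-d embeddings
corpus = [
    {"id": "doc1", "vec": [1.0, 0.0]},
    {"id": "doc2", "vec": [0.9, 0.4]},
    {"id": "doc3", "vec": [0.0, 1.0]},
]
# Stand-in cross-encoder scores; in practice a transformer computes these per (query, doc) pair
scores = {"doc1": 0.2, "doc2": 0.9, "doc3": 0.1}
print(retrieve_then_rerank([1.0, 0.1], corpus, lambda d: scores[d["id"]], k=2))
# → ['doc2', 'doc1']
```

The design point is the cost asymmetry: the bi-encoder stage scales to millions of documents because embeddings are precomputed, while the cross-encoder — far more accurate but far slower — only ever sees the short candidate list.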

Deliverables

Production recommendation API with filtering and personalization options

Vector database with indexed embeddings, optimized for your scale

Cross-encoder re-ranking model for precision improvement

Hybrid search combining semantic and keyword matching

Monitoring and analytics dashboard for search quality and performance

Docker containerization and deployment configuration

Technical documentation and API integration guide

Performance benchmark report (relevance metrics, latency, throughput)

Training session on system operation and index management

30 days of post-deployment support
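
For the hybrid search deliverable above, one common way to fuse semantic and keyword rankings is reciprocal rank fusion (RRF). Whether a given system uses RRF or a weighted-score blend is a design decision, so treat this as an illustrative sketch rather than the method used in every engagement:

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Fuse several ranked lists of doc ids into one.
    k=60 is the constant from the original RRF paper; it damps the
    influence of any single list's top ranks."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

semantic = ["doc3", "doc1", "doc2"]   # from vector search
keyword  = ["doc1", "doc4", "doc3"]   # from e.g. BM25
print(reciprocal_rank_fusion([semantic, keyword]))
# → ['doc1', 'doc3', 'doc4', 'doc2']
```

Documents that appear high in both lists (here doc1 and doc3) dominate the fused ranking, which is exactly the behavior that makes hybrid search robust when either retriever alone misfires.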

Timeline

6 – 10 weeks: data preprocessing and embedding generation, vector database setup and optimization, cross-encoder development, API development, integration and performance tuning, documentation and deployment.

Typical Investment

$30,000 – $100,000

A single cross-encoder system over a moderate corpus sits toward the lower end. Hybrid systems requiring advanced techniques (HyDE, graph-augmented retrieval), a domain-specific bi-encoder, or strict latency requirements sit toward the upper end. Corpus size, query volume, and index complexity are key drivers.

Payment schedule
30% upon contract signing
20% upon embedding generation and vector database setup
20% upon cross-encoder completion
30% upon production deployment and integration

Domain Adaptation

Language models that understand your domain's vocabulary and patterns.

The Challenge

In specialized fields — legal, clinical, financial, scientific — language works differently. Standard BERT-family models are pretrained on general web text, which means they've never encountered your terminology used in context, your field's rhetorical conventions, or the specific ways concepts relate in your domain. For business-critical applications where accuracy matters, this gap produces models that perform well on general benchmarks and poorly on your actual data.

The Solution

I adapt a BERT-family transformer to your specific domain through continued pretraining on your domain corpus, followed by fine-tuning for your target tasks. I handle corpus collection and preprocessing, custom vocabulary adaptation for domain terminology, pretraining validation, and the development of multiple task-specific models on top of the adapted base. The result is a domain-aware language model that understands your field's language, paired with an automated retraining pipeline so the model can be updated as your corpus grows.
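
Continued pretraining keeps BERT's masked-language-modeling objective, just applied to the domain corpus. The standard recipe selects roughly 15% of positions and applies an 80/10/10 split — replace with [MASK], replace with a random token, or leave unchanged. A toy reimplementation of that masking rule (not the production training code, which runs inside a framework like PyTorch/Transformers):

```python
import random

def mask_tokens(token_ids, vocab_size, mask_id, rng, select_p=0.15):
    """BERT-style MLM masking: select ~15% of positions; of those,
    80% -> [MASK], 10% -> random token, 10% -> kept as-is.
    Returns (corrupted inputs, labels with -100 at unselected positions)."""
    inputs, labels = list(token_ids), []
    for i, tok in enumerate(token_ids):
        if rng.random() < select_p:
            labels.append(tok)  # the model must recover the original token here
            roll = rng.random()
            if roll < 0.8:
                inputs[i] = mask_id
            elif roll < 0.9:
                inputs[i] = rng.randrange(vocab_size)
            # else: token kept unchanged, but still predicted
        else:
            labels.append(-100)  # conventional ignore-index for the loss
    return inputs, labels

rng = random.Random(1)
inputs, labels = mask_tokens([5, 17, 42, 8, 99, 23], vocab_size=1000, mask_id=103, rng=rng)
```

The 10% random-replacement and 10% keep-unchanged cases exist so the model cannot simply learn "copy every non-[MASK] token through" — it must build contextual representations for all positions, which is what makes the adapted base useful for downstream fine-tuning.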

Deliverables

Domain-adapted base model weights and configuration

Custom tokenizer to capture domain terminology

Task-specific fine-tuned models built on adapted base

Automated retraining pipeline with performance evaluation

Testing framework for comparing model versions

Architecture documentation covering design decisions and tradeoffs

Training materials for your team

30 days of post-deployment support with priority response
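
The rationale for the custom tokenizer deliverable is easy to see with a toy greedy longest-match subword tokenizer: before a domain term is in the vocabulary it fragments into pieces, and once added it survives as a single token. Real systems use WordPiece or SentencePiece with learned vocabularies, so this is purely illustrative, and the clinical term is a hypothetical example:

```python
def greedy_tokenize(word, vocab):
    """Toy greedy longest-match subword tokenization (WordPiece-like)."""
    pieces, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):  # try the longest substring first
            if word[i:j] in vocab:
                pieces.append(word[i:j])
                i = j
                break
        else:
            pieces.append("[UNK]")
            i += 1
    return pieces

base_vocab = {"anti", "coagul", "ant", "a", "n", "t", "i", "c", "o", "g", "u", "l"}
print(greedy_tokenize("anticoagulant", base_vocab))
# → ['anti', 'coagul', 'ant']  (fragments under the general vocabulary)
print(greedy_tokenize("anticoagulant", base_vocab | {"anticoagulant"}))
# → ['anticoagulant']  (one token after vocabulary adaptation)
```

Fragmentation matters because a term split across three subwords gets no dedicated embedding of its own; vocabulary adaptation gives frequent domain terms a single, trainable representation.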

Timeline

6 – 10 weeks: domain corpus collection and preprocessing, tokenization and vocabulary adaptation, domain pretraining and validation, task-specific model development, documentation and handoff.

Typical Investment

$30,000 – $60,000

Corpus size and domain specificity are the primary drivers, along with compute requirements for pretraining. GPU compute costs for pretraining can be significant and are billed separately. Infrastructure sophistication and the number of task-specific models also affect scope.

Payment schedule
30% upon contract signing
20% upon domain pretraining completion and validation
20% upon task-specific models completion
30% upon production deployment and integration
Ready to build?

From Problem Framing to Production.

Whether you need a domain-adapted text classification model or an end-to-end recommender system with RAG, I help you ask the right questions, frame your business problem, and build cutting-edge AI/ML solutions.

Schedule a consultation