Use Case Examples

Real-world examples of how to use the NeoSpace platform for different business scenarios.

Use Case 1: Fraud Detection Model

Complete workflow for building a fraud detection model.

Scenario

Build a model to detect fraudulent transactions in real-time for a financial institution.

Workflow

Step 1: Connect Transaction Data

  • Create S3 connector to transaction data lake
  • Configure access to transaction data
  • Validate the connection (see the sketch below)
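
The exact connector configuration is platform-specific, but the validation step is worth automating. Below is a minimal sketch, assuming a hypothetical bucket name and prefix, that checks the bucket is reachable and that transaction objects actually exist (boto3 is assumed to be configured with valid credentials):

```python
# Minimal connection check for an S3-backed data lake.
# The bucket name "transactions-lake" and prefix "raw/transactions/"
# are hypothetical; boto3 reads credentials from the environment.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def validate_connection(bucket: str, prefix: str) -> bool:
    try:
        s3.head_bucket(Bucket=bucket)  # confirms the bucket exists and is accessible
        resp = s3.list_objects_v2(Bucket=bucket, Prefix=prefix, MaxKeys=1)
        return resp.get("KeyCount", 0) > 0  # at least one object under the prefix
    except ClientError as err:
        print(f"Connection check failed: {err}")
        return False

print(validate_connection("transactions-lake", "raw/transactions/"))
```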

Step 2: Create Transaction Dataset

  • Create EVENT_BASED dataset for transaction data
  • Select features: transaction amount, merchant, location, time, etc.
  • Select target: fraud indicator
  • Configure an 80/20 training/validation split (example after this list)
  • Process dataset
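
Fraud labels are heavily imbalanced, so the 80/20 split should preserve the fraud rate in both partitions. A minimal sketch with scikit-learn; the file path and column names are illustrative:

```python
# 80/20 train/validation split that preserves the (rare) fraud rate.
# "transactions.parquet" and the "is_fraud" column are illustrative.
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_parquet("transactions.parquet")
features = df.drop(columns=["is_fraud"])
target = df["is_fraud"]

X_train, X_val, y_train, y_val = train_test_split(
    features, target,
    test_size=0.20,    # the 20% validation share
    stratify=target,   # keep fraud prevalence equal across splits
    random_state=42,   # reproducible partitioning
)
print(len(X_train), len(X_val))
```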

Step 3: Train Fraud Detection Model

  • Create training job
  • Use transaction dataset
  • Configure NeoLDM architecture
  • Train model with appropriate hyperparameters
  • Monitor training and save checkpoints (see the sketch after this list)
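
The NeoLDM architecture and its hyperparameters are platform-specific, but the monitor-and-checkpoint pattern itself is generic. The sketch below illustrates it with an incrementally trained scikit-learn classifier on toy data; it stands in for the training job, not for NeoLDM:

```python
# Generic monitor-and-checkpoint loop; a stand-in for the platform's
# training job, not an implementation of NeoLDM.
import joblib
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 20))          # toy feature matrix
y = rng.integers(0, 2, 10_000)             # toy fraud labels

model = SGDClassifier(loss="log_loss")
for epoch in range(5):
    model.partial_fit(X, y, classes=np.array([0, 1]))
    acc = model.score(X, y)                # monitor training accuracy
    joblib.dump(model, f"checkpoint_epoch{epoch}.joblib")  # save checkpoint
    print(f"epoch={epoch} accuracy={acc:.3f}")
```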

Step 4: Evaluate Model

  • Create benchmark for fraud detection
  • Configure classification metrics (Precision, Recall, F1, ROC AUC); see the example below
  • Evaluate checkpoints against benchmark
  • Review results in leaderboard
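
All four metrics are standard and can be sanity-checked offline. A sketch with scikit-learn, using illustrative labels and scores:

```python
# Precision, Recall, F1, and ROC AUC for one checkpoint's predictions.
from sklearn.metrics import precision_score, recall_score, f1_score, roc_auc_score

y_true  = [0, 0, 1, 1, 0, 1, 0, 1]                   # ground-truth fraud labels
y_score = [0.1, 0.4, 0.8, 0.3, 0.2, 0.9, 0.6, 0.7]   # model fraud scores
y_pred  = [1 if s >= 0.5 else 0 for s in y_score]    # thresholded at 0.5

print("precision:", precision_score(y_true, y_pred))
print("recall:   ", recall_score(y_true, y_pred))
print("f1:       ", f1_score(y_true, y_pred))
print("roc_auc:  ", roc_auc_score(y_true, y_score))  # threshold-free
```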

Step 5: Select Best Model

  • Compare models in leaderboard
  • Focus on Precision and Recall (minimize false positives and false negatives)
  • Select best checkpoint

Step 6: Deploy to Production

  • Deploy model to inference server
  • Configure for real-time predictions
  • Monitor latency and accuracy (see the endpoint sketch after this list)
  • Scale based on transaction volume
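
Deployment itself happens on the platform's inference server; the sketch below only illustrates the shape of a real-time scoring endpoint with per-request latency measurement, using FastAPI and the joblib checkpoint from the training sketch (the route and payload schema are assumptions):

```python
# Illustrative real-time scoring endpoint; not the platform's inference server.
import time
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("checkpoint_epoch4.joblib")  # best checkpoint from training

class Transaction(BaseModel):
    features: list[float]  # ordered feature vector for one transaction

@app.post("/score")
def score(txn: Transaction):
    start = time.perf_counter()
    prob = float(model.predict_proba([txn.features])[0][1])  # fraud probability
    latency_ms = (time.perf_counter() - start) * 1000
    return {"fraud_probability": prob, "latency_ms": latency_ms}
```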

Key Considerations

  • Low Latency: Scores must come back fast enough to sit inside the payment authorization path
  • High Precision: Minimize false positives
  • High Recall: Minimize false negatives
  • Scalability: Handle high transaction volumes

Use Case 2: Credit Scoring Model

Complete workflow for building a credit scoring model.

Scenario

Build a model to predict credit risk for loan applications.

Workflow

Step 1: Connect Customer Data

  • Create connectors to customer databases
  • Connect to credit history data
  • Integrate multiple data sources

Step 2: Create Customer Dataset

  • Create FEATURE_BASED dataset
  • Combine data from multiple sources
  • Select features: demographics, income, credit history, etc.
  • Select target: default probability
  • Configure data partitions (see the sketch below)
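
Combining sources usually means joining on a shared customer key before partitioning. A pandas sketch; the file names, join key, and partition ratios are illustrative:

```python
# Join demographic and credit-history sources on a shared key, then
# carve out train/validation/test partitions. All names are illustrative.
import pandas as pd

demographics = pd.read_parquet("demographics.parquet")
credit_hist  = pd.read_parquet("credit_history.parquet")

df = demographics.merge(credit_hist, on="customer_id", how="inner")

# Simple random partitions: 70% train, 15% validation, 15% test.
df = df.sample(frac=1.0, random_state=42)  # shuffle
n = len(df)
train = df.iloc[: int(0.70 * n)]
valid = df.iloc[int(0.70 * n): int(0.85 * n)]
test  = df.iloc[int(0.85 * n):]
print(len(train), len(valid), len(test))
```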

Step 3: Train Credit Scoring Model

  • Create training job
  • Use customer dataset
  • Configure model for regression task
  • Train model with appropriate architecture
  • Monitor training progress

Step 4: Evaluate Model

  • Create benchmark for credit scoring
  • Configure regression metrics (MSE, MAE, R²)
  • Also compute discrimination metrics (KS, Gini) for risk ranking (see the example below)
  • Evaluate checkpoints
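
KS and Gini both measure how well scores separate defaulters from non-defaulters, and Gini follows directly from ROC AUC. A sketch with illustrative labels and scores:

```python
# KS statistic and Gini coefficient from model scores.
# Gini = 2 * AUC - 1: 0 for a random model, 1 for a perfect one.
import numpy as np
from scipy.stats import ks_2samp
from sklearn.metrics import roc_auc_score

y_true = np.array([0, 0, 1, 1, 0, 1, 0, 0, 1, 0])  # 1 = default
scores = np.array([0.2, 0.1, 0.7, 0.6, 0.3, 0.9, 0.4, 0.2, 0.8, 0.5])

auc  = roc_auc_score(y_true, scores)
gini = 2 * auc - 1
ks   = ks_2samp(scores[y_true == 1], scores[y_true == 0]).statistic

print(f"AUC={auc:.3f}  Gini={gini:.3f}  KS={ks:.3f}")
```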

Step 5: Compare and Select

  • Compare models in leaderboard
  • Focus on KS and Gini metrics (common in credit scoring)
  • Select best model

Step 6: Deploy Model

  • Deploy to inference server
  • Integrate with loan application system
  • Serve real-time credit scores
  • Monitor model performance

Key Considerations

  • Regulatory Compliance: Ensure model meets regulatory requirements
  • Explainability: Credit decisions may need to be explained to applicants and regulators
  • Fairness: Ensure fair treatment across demographics
  • Performance: Balance accuracy with interpretability

Use Case 3: Personalized Recommendations

Complete workflow for building a recommendation system.

Scenario

Build a model to provide personalized product recommendations for e-commerce.

Workflow

Step 1: Connect Customer and Product Data

  • Create connectors to customer behavior data
  • Connect to product catalog
  • Integrate purchase history

Step 2: Create Recommendation Dataset

  • Create EVENT_BASED dataset for user interactions
  • Include: user actions, product features, context
  • Select target: purchase probability or rating
  • Configure temporal splits: train on past interactions, validate on recent ones (see the example below)
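
A temporal split prevents information from the future leaking into training. A pandas sketch that trains on the earliest 80% of interactions and validates on the most recent 20%; the file path, column name, and boundary are illustrative:

```python
# Temporal split: train on past interactions, validate on recent ones.
# "interactions.parquet", the "timestamp" column, and the 80/20 boundary
# are illustrative assumptions.
import pandas as pd

events = pd.read_parquet("interactions.parquet")
events = events.sort_values("timestamp")

cutoff = events["timestamp"].quantile(0.80)  # boundary between past and recent
train = events[events["timestamp"] <= cutoff]
valid = events[events["timestamp"] > cutoff]
print(train["timestamp"].max(), valid["timestamp"].min())
```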

Step 3: Train Recommendation Model

  • Create training job
  • Use interaction dataset
  • Configure for ranking or classification
  • Train model with appropriate architecture
  • Monitor training

Step 4: Evaluate Model

  • Create benchmark for recommendations
  • Configure ranking metrics (e.g., NDCG) or classification metrics (see the example after this list)
  • Evaluate on held-out test set
  • Review results
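
For the ranking case, NDCG is a common choice. scikit-learn's ndcg_score takes one row per user containing true relevance grades and predicted scores over the same candidate items; the toy values below are illustrative:

```python
# NDCG over two users' candidate lists: each row holds the true relevance
# grades and predicted scores for the same five candidate products.
import numpy as np
from sklearn.metrics import ndcg_score

true_relevance = np.array([[3, 2, 0, 0, 1],    # user 1's graded relevance
                           [0, 1, 3, 2, 0]])   # user 2's graded relevance
predicted      = np.array([[2.1, 1.3, 0.2, 0.1, 0.9],
                           [0.2, 0.7, 2.5, 1.1, 0.3]])

print("NDCG@5:", ndcg_score(true_relevance, predicted, k=5))
```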

Step 5: Select Best Model

  • Compare models in leaderboard
  • Focus on metrics relevant to business (conversion, revenue)
  • Select best model

Step 6: Deploy Model

  • Deploy to inference server
  • Integrate with recommendation API
  • Serve real-time recommendations
  • A/B test different models (see the bucketing sketch below)
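
A/B routing is commonly done by hashing the user ID so that each user consistently sees the same model variant. A minimal sketch; the 50/50 split is an illustrative choice:

```python
# Deterministic A/B bucketing: each user is consistently routed to the
# same model variant. The 50/50 split is an illustrative choice.
import hashlib

def assign_variant(user_id: str) -> str:
    digest = hashlib.md5(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % 100  # stable bucket in [0, 100)
    return "model_A" if bucket < 50 else "model_B"

for uid in ["user-1", "user-2", "user-3"]:
    print(uid, "->", assign_variant(uid))
```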

Key Considerations

  • Real-Time: Recommendations must be generated quickly enough to render with the page
  • Personalization: Model should adapt to individual users
  • Diversity: Balance relevance with diversity
  • Scalability: Handle large user and product catalogs

Use Case 4: Churn Prediction

Complete workflow for building a customer churn prediction model.

Scenario

Build a model to predict which customers are likely to churn.

Workflow

Step 1: Connect Customer Data

  • Create connectors to customer databases
  • Connect to usage data
  • Integrate support and engagement data

Step 2: Create Churn Dataset

  • Create FEATURE_BASED dataset
  • Include: customer features, usage patterns, engagement metrics
  • Select target: churn indicator
  • Configure appropriate splits

Step 3: Train Churn Model

  • Create training job
  • Use churn dataset
  • Configure for classification
  • Train model
  • Monitor training

Step 4: Evaluate Model

  • Create benchmark for churn prediction
  • Configure classification metrics
  • Focus on Recall (identify churners) and Precision (avoid false alarms); see the threshold example below
  • Evaluate checkpoints
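
Because the Precision/Recall trade-off is set by the decision threshold, it helps to scan the full curve and pick the threshold that meets a recall target. A sketch with scikit-learn; the labels, scores, and 80% recall target are illustrative:

```python
# Pick the decision threshold that reaches a recall target while keeping
# precision as high as possible. The 0.80 recall target is illustrative.
import numpy as np
from sklearn.metrics import precision_recall_curve

y_true = np.array([0, 1, 1, 0, 1, 0, 0, 1, 1, 0])
scores = np.array([0.1, 0.8, 0.6, 0.3, 0.9, 0.2, 0.4, 0.7, 0.5, 0.3])

precision, recall, thresholds = precision_recall_curve(y_true, scores)
ok = recall[:-1] >= 0.80               # points aligned with thresholds
best = np.argmax(precision[:-1] * ok)  # highest precision among qualifying points
print(f"threshold={thresholds[best]:.2f} "
      f"precision={precision[best]:.2f} recall={recall[best]:.2f}")
```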

Step 5: Select Best Model

  • Compare models in leaderboard
  • Balance Precision and Recall based on business needs
  • Select best model

Step 6: Deploy Model

  • Deploy to inference server
  • Integrate with customer management system
  • Generate churn scores
  • Trigger retention campaigns (see the sketch below)
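
Downstream, churn scores become a work queue for the retention team. A minimal sketch of the scoring-to-campaign handoff; the 0.7 trigger threshold, column names, and output file are assumptions:

```python
# Turn churn scores into a retention-campaign list. The 0.7 trigger
# threshold and all column/file names are illustrative assumptions.
import pandas as pd

scores = pd.DataFrame({
    "customer_id": ["c1", "c2", "c3", "c4"],
    "churn_score": [0.91, 0.12, 0.75, 0.40],
})

at_risk = scores[scores["churn_score"] >= 0.70].sort_values(
    "churn_score", ascending=False
)
at_risk.to_csv("retention_campaign_targets.csv", index=False)  # hand off to CRM
print(at_risk)
```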

Key Considerations

  • Early Detection: Predict churn early enough to intervene
  • Actionability: Scores should trigger actionable interventions
  • Cost-Benefit: Balance model performance with intervention costs
  • Monitoring: Track model performance and churn rates

Common Patterns

The four workflows above share several recurring patterns:

Data Integration:

  • Multiple data sources
  • Data quality validation
  • Feature engineering
  • Temporal considerations

Model Development:

  • Iterative training
  • Multiple experiments
  • Checkpoint management
  • Performance tracking

Evaluation:

  • Multiple benchmarks
  • Comprehensive metrics
  • Fair comparison
  • Trend analysis

Deployment:

  • Gradual rollout
  • Performance monitoring
  • Scaling based on demand
  • Continuous improvement

Next Steps