Enhancement workspace guide
This guide describes the enhancement workspace in the NeoSpace NeoData area—the screen you open after you create an enhancement or choose Open workspace from the list. Labels in the product may vary slightly by release, but the concepts below stay the same.
Before you open the workspace
An enhancement is created from NeoData → Data enhancement → New enhancement (or equivalent). You provide:
- A name and optional description so your team recognizes the scope of the workspace.
- One or more source (raw) datasets that have already reached the Ready state. If the list is empty, create and preprocess datasets under Datasets and Connectors first.
After creation, NeoSpace opens the workspace for that enhancement so you can continue configuration.
Main areas of the workspace
Source datasets
The Source datasets section lists raw datasets attached to this enhancement. From here you can:
- See which tables feed the workspace.
- Add another ready dataset when your model needs additional sources.
- Open schema preview where available to inspect column names and types before you define facts and predictions.
Keeping sources accurate matters: the feature schema and statistics are derived from the datasets you attach.
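Because the feature schema is derived from the attached datasets, it can pay to verify a source's header before attaching it. The sketch below is illustrative and independent of any NeoSpace API: a small stdlib helper that reports which expected columns are missing from a CSV header.

```python
# Illustrative sketch (not a NeoSpace API): verify that a dataset's
# header contains the columns you expect before attaching it as a source.
import csv
import io

def check_header(csv_text, required_columns):
    """Return the required columns missing from the CSV header."""
    header = next(csv.reader(io.StringIO(csv_text)))
    return [col for col in required_columns if col not in header]

sample = "customer_id,age,churn_label\n1,34,0\n"
missing = check_header(sample, ["customer_id", "age", "signup_date"])
print(missing)  # ['signup_date'] -> fix the source before attaching it
```

A non-empty result means the dataset needs another preprocessing pass before it can usefully feed the workspace.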
Feature schema and statistics
The workspace surfaces a feature table (or equivalent) with column-level information—such as uniqueness, missing values, and summary statistics—so you can validate that the data matches expectations before modeling.
Use this area to understand distributions and data quality at a glance. When something looks off, return to dataset preparation or connectors rather than forcing predictions in the pipeline.
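The column-level signals shown here (missing values, uniqueness) are straightforward to cross-check yourself. This sketch, which does not use any NeoSpace API, computes a missing-value ratio and a distinct count per column for a small in-memory table:

```python
# Illustrative: compute the column-level signals the workspace surfaces
# (missing ratio, distinct count) to cross-check the feature table.
def column_stats(rows):
    """rows: list of dicts. Returns {column: (missing_ratio, distinct_count)}."""
    stats = {}
    if not rows:
        return stats
    for col in rows[0]:
        values = [r[col] for r in rows]
        missing = sum(1 for v in values if v in (None, ""))
        distinct = len({v for v in values if v not in (None, "")})
        stats[col] = (missing / len(values), distinct)
    return stats

rows = [{"age": 34, "city": "Oslo"}, {"age": None, "city": "Oslo"}]
print(column_stats(rows))  # {'age': (0.5, 1), 'city': (0.0, 1)}
```

A high missing ratio or an unexpectedly low distinct count is exactly the kind of finding that should send you back to dataset preparation rather than forward into modeling.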
Facts and predictions
NeoSpace distinguishes:
- Facts — columns you treat as observed in the real world (inputs or ground truth you are not asking the model to invent).
- Predictions — outputs you expect from model inference (for example a probability, score, or class).
Marking this distinction correctly keeps training targets aligned with deployment: the platform uses it to wire training, evaluation, and serving consistently.
You can search and filter long column lists to work quickly on wide schemas.
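One way to think about the fact/prediction split is as a classification that must cover every column exactly once. The sketch below is hypothetical (the column names and helper are illustrative, not a NeoSpace schema) and simply validates that no column is both a fact and a prediction, and none is left unclassified:

```python
# Hypothetical example (names are illustrative, not a NeoSpace schema):
# declare column roles and check every column is classified exactly once.
def validate_roles(columns, facts, predictions):
    facts, predictions = set(facts), set(predictions)
    overlap = facts & predictions
    unassigned = set(columns) - facts - predictions
    if overlap:
        raise ValueError(f"columns marked as both fact and prediction: {overlap}")
    if unassigned:
        raise ValueError(f"unclassified columns: {unassigned}")
    return True

columns = ["customer_id", "age", "churn_probability"]
validate_roles(columns,
               facts=["customer_id", "age"],
               predictions=["churn_probability"])  # passes
```

Catching a mislabeled column at this stage is much cheaper than discovering it after training, when targets and inputs have already been wired inconsistently.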
Prediction pipeline
The prediction pipeline defines execution order for prediction outputs when one output depends on another (for example a risk score that feeds a second stage). You arrange outputs on a canvas or ordered list—depending on the version of the UI—so that the sequence matches how NeoSpace should run inference in production.
Typical workflow:
- Define prediction outputs in the facts and predictions area.
- Add them to the pipeline in the order that reflects business logic.
- Validate the pipeline when the product prompts you (for example, when it requires at least one prediction to be present).
Some versions can suggest an initial ordering; always review the proposed order against your organization’s policies.
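The ordering problem the pipeline solves is dependency resolution: an output may only run after the outputs it consumes. This can be sketched with Python's standard-library graphlib; the output names and dependencies below are made up for illustration:

```python
# Sketch of the ordering the pipeline enforces, using stdlib graphlib.
# Output names and dependencies are illustrative only.
from graphlib import TopologicalSorter

# Each prediction output maps to the outputs it depends on.
dependencies = {
    "risk_score": set(),                # first stage, no upstream predictions
    "review_priority": {"risk_score"},  # second stage consumes the risk score
}
order = list(TopologicalSorter(dependencies).static_order())
print(order)  # ['risk_score', 'review_priority']
```

A cycle (two outputs depending on each other) has no valid order, which is why a pipeline validation step can reject such configurations outright.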
Assistant and transformation tooling
The workspace includes panels that help you iterate on configuration and review transformation-related activity. Use them to stay aligned with schema changes, follow platform guidance shown in context, and track what changed while you work. Operational detail depends on your NeoSpace version; your administrator can confirm which options are enabled.
Logs
A Logs section records workspace activity so you can debug preprocessing runs, saves, and related actions. If something fails, start here before opening a support case with NeoSpace.
Saving, reprocessing, and lifecycle
- Save persists your enhancement definition when you have confirmed changes.
- Reprocess (when shown) triggers a new processing run for the data model after material changes—expect this to take time on large datasets.
Enhancements move through status values such as draft, processing, ready, or failed (exact labels may vary). Failed states usually require checking dataset readiness, pipeline validity, or logs.
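The lifecycle described above can be modeled as a small state machine. This sketch is illustrative only; the exact labels and allowed transitions in your NeoSpace release may differ:

```python
# Illustrative state machine for the enhancement lifecycle; exact labels
# and transitions vary by release.
ALLOWED = {
    "draft": {"processing"},
    "processing": {"ready", "failed"},
    "ready": {"processing"},   # e.g. a reprocess after material changes
    "failed": {"processing"},  # retry after fixing datasets or the pipeline
}

def can_transition(current, target):
    return target in ALLOWED.get(current, set())

print(can_transition("draft", "processing"))  # True
print(can_transition("draft", "ready"))       # False: must process first
```

Modeling the lifecycle this way makes clear why a failed state is not terminal: after you fix dataset readiness or pipeline validity, a reprocess moves the enhancement back through processing.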
Screenshots (product captures)
Place files under static/img/data-enhancement/.