Glossary
A user assigned to a role with reviewer privileges can approve another user’s changes, rendering them visible to other users who have access to the artifact. An approved operation involving multiple artifacts is atomic.
An object managed by TIBCO ModelOps, similar to a file within a file system. An artifact can be created through user interaction with TIBCO ModelOps or can be created using an external tool and imported into TIBCO ModelOps. Examples are scoring pipelines, scoring flows, and models.
Before working on a set of projects and artifacts, retrieve a local, writable working copy of the items to be worked on by performing a checkout. TIBCO ModelOps supports single and bulk file checkouts.
After making changes, the commit operation submits the artifact to the TIBCO ModelOps repository for approval. This action does not make the changes visible to others; the changes must first be approved. Commits involving multiple artifacts are atomic: all the artifact updates are approved or rejected together.
If two users check out, modify, and commit the same artifact, what happens? A user whose role is assigned reviewer privileges is notified of the two commits. The first commit to be approved invalidates the second commit. The “losing” user then must synchronize the approved changes with the original checkout and then recommit the changes for approval.
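The conflict rule above amounts to optimistic concurrency: a commit can only be approved if it was based on the latest approved version of the artifact. A minimal sketch, using hypothetical names rather than the ModelOps API:

```python
# Hypothetical sketch of the commit-conflict rule: each checkout records the
# approved version it was taken from, and only a commit based on the current
# version can be approved. This is not the ModelOps API, just the concept.

class Artifact:
    def __init__(self):
        self.version = 1  # approved version numbers start at one

    def checkout(self):
        # A checkout records the approved version it was based on.
        return {"base_version": self.version}

    def approve_commit(self, working_copy):
        # The first commit based on the current version wins; a commit
        # based on an older version is invalidated.
        if working_copy["base_version"] != self.version:
            return False  # "losing" commit: must synchronize and recommit
        self.version += 1
        return True

a = Artifact()
user1 = a.checkout()
user2 = a.checkout()
assert a.approve_commit(user1) is True    # first approval wins
assert a.approve_commit(user2) is False   # second commit is invalidated
user2["base_version"] = a.version         # synchronize with approved changes
assert a.approve_commit(user2) is True    # recommit succeeds
```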
Models, Scoring Flows, and Data Sources are controlled through the TIBCO ModelOps system.
Build new artifacts from scratch.
A configurable and deployable component that maps between an external protocol and scoring flows.
A deployed and configured data channel ready to consume or produce data.
A data channel configuration that includes a unique identifier, a data schema, and searchable tags.
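The three elements of a data channel configuration can be sketched as a small record type; the field names and types here are illustrative assumptions, not the ModelOps format:

```python
# Hypothetical sketch of a data channel configuration: a unique identifier,
# a data schema, and searchable tags.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class DataChannelConfig:
    channel_id: str                                      # unique identifier
    schema: dict                                         # field name -> type name
    tags: frozenset = field(default_factory=frozenset)   # searchable tags

cfg = DataChannelConfig(
    channel_id="orders-in",
    schema={"order_id": "string", "amount": "double"},
    tags=frozenset({"kafka", "production"}),
)
assert "kafka" in cfg.tags  # tags support searching/filtering channels
```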
A data channel that consumes output data with a known schema and a standard serialization format. Scoring flow results are sent to a data sink.
A data channel that provides input data with a known schema and a standard serialization format. Scoring flow input is read from a data source.
Loads a decision table artifact to a running decision table EventFlow module in StreamBase Studio, or loads a model file artifact to a model operator EventFlow module in StreamBase Studio.
Diff/merge is used during synchronization to combine another user’s approved changes with the local changes in your checked-out copy of the artifact.
Discards your changes in the checked-out copy of the selected artifact. Discard essentially “un-checks out” the artifact.
Make changes to projects and artifacts.
A named collection of resources required to execute a scoring pipeline. Environments are logically isolated from each other. They also support the promotion of a scoring pipeline from one environment to another, for example, from Development to Testing to Production.
Data that was previously captured from an action or process and stored in a durable storage mechanism, for example, a file or a database. Contrast with real-time data.
TIBCO ModelOps maintains a log of approved changes to each artifact and labels each change with a version number. Version numbers start with one and are incremented by one for each new approved revision of the artifact.
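The versioning rule is simple enough to sketch directly: the change log starts at version one, and each approved revision increments the version by one (names here are illustrative, not the ModelOps API):

```python
# Hypothetical sketch of per-artifact version numbering: versions start at
# one and increase by one for each approved revision.
log = []

def approve(change_description):
    # Each approved change is labeled with the next version number.
    log.append({"version": len(log) + 1, "change": change_description})

approve("initial import")
approve("update schema")
assert [entry["version"] for entry in log] == [1, 2]
```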
Add single or bulk artifacts from a source external to TIBCO ModelOps.
A uniquely identified context started by the scheduling service that manages one or more tasks.
A mathematical specification of an analytical process to predict an outcome based on a set of input data. Example specification languages are PMML, Python, and TensorFlow.
Evaluating the performance, both technical (resource consumption, latency, and availability) and accuracy (variance from expected, business impact, and fairness), of a model. Model monitoring can occur in real-time using real-time metrics, or after the fact using historical data.
An abstraction layer in a scoring service to host execution of different model types.
An individual action in a scoring flow. The input to a processing step is all output data from the previous processing step. The processing step can transform, or augment, the input data before sending it to the next processing step. A processing step can access external services, for example, a scoring service, to perform its function.
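The chaining behavior described above, where each step receives the full output of the previous step and may transform or augment it, can be sketched as ordinary function composition (the step functions are made-up examples):

```python
# Hypothetical sketch of processing steps in a scoring flow: each step takes
# the previous step's entire output and may transform or augment it.
def augment_with_total(record):
    # Augmentation: add a derived field without removing existing ones.
    return dict(record, total=record["price"] * record["qty"])

def flag_large(record):
    # Transformation based on the augmented data from the previous step.
    return dict(record, large=record["total"] > 100)

steps = [augment_with_total, flag_large]   # ordered sequence of steps
data = {"price": 30, "qty": 5}             # input from a data source
for step in steps:
    data = step(data)                      # output feeds the next step
assert data["total"] == 150 and data["large"] is True
```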
A container for artifacts.
To submit modified artifacts from a sandbox space for approval. Once approved, the modified artifacts are in the published space.
A space containing approved artifacts. Artifacts in the published space are visible to all users with the required permissions.
Data that is available immediately following an action or process, for example, a Kafka message. Also called streaming data. Contrast with historical data.
Quantitative values captured during scoring pipeline execution that provide Key Performance Indicators (KPIs), which can be acted upon to improve quality of service.
A user assigned to a role with reviewer privileges can reject another user’s committed changes. In that event, changes remain local to the user performing the commit. A reject operation involving multiple artifacts is atomic.
Replaces an artifact with one uploaded externally, or a newer or older version stored in TIBCO ModelOps.
Data available after scoring flow execution for post-processing analysis. Result data is available from data sinks.
Schedules and manages the jobs that start and run scoring pipelines.
The formal definition of the structure of data, defining types, constraints, and cardinality.
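The three aspects of a schema named above can be sketched as a small validator; the rule format and field names are assumptions for illustration, not a ModelOps schema language:

```python
# Hypothetical sketch of a schema covering the three aspects named above:
# field types, a value constraint, and cardinality (required vs. optional).
SCHEMA = {
    "order_id": {"type": str,   "required": True},
    "amount":   {"type": float, "required": True, "min": 0.0},
    "note":     {"type": str,   "required": False},
}

def validate(record):
    for name, rule in SCHEMA.items():
        if name not in record:
            if rule["required"]:
                return False          # cardinality: required field missing
            continue                  # optional field may be absent
        value = record[name]
        if not isinstance(value, rule["type"]):
            return False              # type check
        if "min" in rule and value < rule["min"]:
            return False              # constraint check
    return True

assert validate({"order_id": "A1", "amount": 9.5}) is True
assert validate({"order_id": "A1", "amount": -1.0}) is False  # violates min
```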
An ordered sequence of processing steps that operate on data received from a data source and sent to a data sink. The data flowing through a scoring flow can be transformed and augmented by processing steps.
A data source, a data sink, one or more scoring flows, and zero or more models used in a scoring flow. A scoring pipeline is started by the scheduling service as a job.
Manages execution of models in the context of a model runner.
A container for projects. There is a Sandbox Space and a Published Space.
This command retrieves another user’s committed and approved changes, with the option to apply them to your local working copy of the artifacts. It also applies to externally stored artifacts, updating the copies stored in the TIBCO ModelOps repository with the external versions.
Execution context for a scoring pipeline or a data channel instance. The execution context consists of the cloud resources required to execute a pipeline or a data channel.