
Reference Documentation

Access token - configuration

External Application Client creation and configuration

For detailed instructions, see the official documentation.

To authenticate with context-api, an External Application must be created in the Hyland Experience Admin Portal:

🔗 Portal: https://admin.experience.hyland.com/

Steps:

  1. Create a Service User via the Identity tab.
     • Go to Identity → Service Users
     • Click Create Service User
     • Fill in the required details
  2. Register a new External Application:
     • Go to External Systems → click Create External Application
     • Select Service Application
     • Fill in the Application name
     • Choose the previously created Service User
     • Select the allowed scopes listed in the table below

🔑 Scopes for External Applications

This section describes the scopes required for external applications (see the documentation for the full list).

| Scope | Description |
| --- | --- |
| environment_authorization | Allows authorization checks to be performed at login, before APIs are called, in the context of an environment. Adds the permissions and roles granted to the user in the configured environment and application to the token as claims. An associated environment and application in the subscription must be configured. |

โ—Save and copy the Client ID and Client Secret โ—โ€‹

📌 The Client Secret will be visible only once. Store it securely.

Access token request

To obtain an access token for context-api, the consumer must send a POST request to the following endpoint:


curl -X POST https://auth.iam.experience.hyland.com/idp/connect/token \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "client_id=<client_id>" \
  -d "client_secret=<client_secret>" \
  -d "grant_type=client_credentials" \
  -d "scope=environment_authorization"

Context API - REST API definition and code samples

The API definition can be found here.

โš™๏ธ Requirements for actions on processing requestโ€‹

Depending on the selected subset of actions passed in the /process request, additional fields may be required. The table below summarizes all required properties for each supported action:

| Action | Input File Type | Additional Requirements | Result Schema |
| --- | --- | --- | --- |
| image-description | image | All objectKeys must point to image files | string |
| image-metadata-generation | image | All objectKeys must be image files. kSimilarMetadata must be provided and contain non-empty items | Dictionary<string, object> |
| text-classification | text | All objectKeys must be text files. classes must contain at least 2 distinct non-empty entries | string |
| text-summarization | text | All objectKeys must be text files | string |
| image-classification | image | All objectKeys must be image files. classes must contain at least 2 distinct non-empty entries | string |
| image-embeddings | image | All objectKeys must be image files | List<float> |
| text-embeddings | text | All objectKeys must be text files | List<List<float>> |
| named-entity-recognition-image | image | All objectKeys must be image files | Dictionary<string, List<string>> |
| named-entity-recognition-text | text | All objectKeys must be text files | Dictionary<string, List<string>> |

📤 Notes

  • If no classification actions (image-classification, text-classification) are specified, classes must be null or empty.
  • If image-metadata-generation is not specified, then kSimilarMetadata must be null or empty.
  • All objectKeys must be distinct and use valid formats (e.g. .png, .jpg, .txt).
  • If an action is specified with an invalid file format, the request will be rejected with validation errors.
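
To illustrate how these requirements fit together, the sketch below builds a /process request in Python. It assumes a JSON body using the field names referenced in this section (objectKeys, actions, classes, kSimilarMetadata) and a bearer token in the Authorization header; the base URL, the object keys, and the exact body schema are placeholders, so consult the API definition linked above for the authoritative contract.

```python
# Hedged sketch: submitting a /process request. The payload field names
# (objectKeys, actions, classes, kSimilarMetadata) come from the tables and
# notes above; the base URL and the exact body schema are assumptions.
import requests

CONTEXT_API_BASE = "https://<context-api-host>"  # placeholder host

def submit_processing_request(token: str) -> dict:
    payload = {
        # Distinct object keys with valid file extensions (e.g. .png, .jpg, .txt).
        "objectKeys": ["testing/documents/contract-001.txt"],
        # Any subset of the supported actions listed in the table above.
        "actions": ["text-classification", "text-summarization"],
        # Required only when a classification action is requested;
        # must contain at least 2 distinct non-empty entries.
        "classes": ["invoice", "contract"],
        # Must be null or empty unless image-metadata-generation is requested.
        "kSimilarMetadata": None,
    }
    response = requests.post(
        f"{CONTEXT_API_BASE}/process",
        json=payload,
        headers={"Authorization": f"Bearer {token}"},
    )
    response.raise_for_status()
    return response.json()
```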

🧾 Results for actions

Different actions return different result types depending on the analysis performed.
The table below summarizes the output schema returned by each action:

| Action | Result Schema | Description |
| --- | --- | --- |
| image-description | string | A short description of the image content |
| image-metadata-generation | Dictionary<string, object> | Key-value representation of predicted metadata for an image |
| text-classification | string | Predicted class label based on the text content |
| text-summarization | string | Generated summary of the text |
| image-classification | string | Predicted class label based on the image content |
| image-embeddings | List<float> | Vector representation of the image |
| text-embeddings | List<List<float>> | Vector representation of the text, typically per sentence |
| named-entity-recognition-image | Dictionary<string, List<string>> | Detected named entities extracted from image content |
| named-entity-recognition-text | Dictionary<string, List<string>> | Detected named entities extracted from text content |

📤 Notes

  • All result fields follow the common ProcessingResult<T> structure:

    {
      "isSuccess": true,
      "result": "<T>",
      "error": null
    }
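
Consumers can unwrap this structure the same way for every action. Below is a minimal Python helper, assuming only the camelCase field names (isSuccess, result, error) shown above; the ProcessingError exception type is hypothetical.

```python
# Minimal helper for unwrapping a ProcessingResult<T> object from the
# response JSON; field names (isSuccess, result, error) match the structure
# shown above. ProcessingError is a hypothetical exception type.
from typing import Any

class ProcessingError(Exception):
    """Raised when a ProcessingResult reports isSuccess = false."""

def unwrap_result(processing_result: dict[str, Any]) -> Any:
    if processing_result.get("isSuccess"):
        return processing_result.get("result")
    error = processing_result.get("error") or {}
    raise ProcessingError(
        f"{error.get('errorType', 'UnknownError')}: {error.get('message', '')}"
    )
```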

โ— Error Handlingโ€‹

If an error occurs during document processing or while executing an action, an error object is stored in place of the action result. Errors for all action types follow the same schema.

Format of a Failed Action

If an action fails, its ProcessingResult<T> will contain isSuccess: false and an error object instead of the expected result.

{
  "isSuccess": false,
  "result": null,
  "error": {
    "errorType": "UnexpectedError",
    "message": "The given key 'statusCode' was not present in the dictionary."
  }
}

Example get result

{
  "id": "156e4a8a-e9c8-41c0-95f0-7039982236f3",
  "timestamp": "2025-03-24T15:45:14.104323+01:00",
  "results": [
    {
      "objectKey": "testing/documents/fedex.pdf",
      "imageDescription": null,
      "metadata": null,
      "textSummary": {
        "isSuccess": false,
        "result": null,
        "error": {
          "errorType": "UnexpectedError",
          "message": "The given key 'statusCode' was not present in the dictionary."
        }
      },
      "textClassification": {
        "isSuccess": true,
        "result": "invoice",
        "error": null
      },
      "imageClassification": null,
      "textEmbeddings": null,
      "imageEmbeddings": null,
      "generalProcessingErrors": null,
      "namedEntityText": null,
      "namedEntityImage": null
    }
  ],
  "status": "PARTIAL_FAILURE"
}
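
As a usage sketch, the Python snippet below walks such a response and reports each requested action's outcome per object. It relies only on the field names visible in the example above; report_results is an illustrative helper, not part of the API.

```python
# Usage sketch: walk a "get result" response and report the outcome of each
# requested action per object. Only field names visible in the example above
# are used.
ACTION_FIELDS = [
    "imageDescription", "metadata", "textSummary", "textClassification",
    "imageClassification", "textEmbeddings", "imageEmbeddings",
    "namedEntityText", "namedEntityImage",
]

def report_results(response_body: dict) -> None:
    print(f"Overall status: {response_body['status']}")
    for entry in response_body["results"]:
        print(f"Object: {entry['objectKey']}")
        for field in ACTION_FIELDS:
            action_result = entry.get(field)
            if action_result is None:
                continue  # this action was not requested for the object
            if action_result.get("isSuccess"):
                print(f"  {field}: {action_result['result']}")
            else:
                error = action_result.get("error") or {}
                print(f"  {field} failed: {error.get('errorType')}: {error.get('message')}")
```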