How can I upload or send documents for processing?
You can upload documents to the super.AI platform either directly through our user interface or by using our API.
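As a rough sketch, an API upload might look like the following. The endpoint URL, auth header format, and JSON field names here are assumptions for illustration only, not the documented super.AI API; see the linked guide for the real interface.

```python
# Hypothetical sketch of submitting document URLs via an API call.
# The endpoint, auth scheme, and payload fields are assumptions, not
# the actual super.AI API -- consult the Uploading Data guide.
import json
import urllib.request

def build_upload_request(api_key: str, document_urls: list[str]) -> urllib.request.Request:
    """Build (but do not send) a JSON POST request submitting document URLs."""
    payload = json.dumps({"inputs": [{"url": u} for u in document_urls]}).encode()
    return urllib.request.Request(
        "https://api.super.ai/v1/upload",  # hypothetical endpoint
        data=payload,
        headers={"Authorization": f"API-KEY {api_key}",
                 "Content-Type": "application/json"},
        method="POST",
    )

req = build_upload_request("my-key", ["https://example.com/invoice.pdf"])
# Sending would be: urllib.request.urlopen(req)
```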
🔗 For more information, see Uploading Data.
How do I correct extracted outputs?
It's simple to correct outputs on the super.AI platform: navigate to the work queue and select “Review” to open the review screen.
🔗 For a comprehensive guide on how to correct outputs, see Reviewing Extracted Data.
How can I download extracted data?
You can download the extracted data in standard CSV/JSON formats through our user interface.
🔗 For more information, see Downloading Extracted Data.
How can I include HITL as a mandatory manual step?
To include human-in-the-loop (HITL) review as a mandatory manual step, choose the “Collaborator Worker” option during setup.
What is the maximum size for a single document?
The maximum size for a single document (the contents of a single URL) is 50MB.
🔗 For additional details, see Input Requirements.
How does super.AI enhance data extraction accuracy?
Super.AI uses a benchmark set of ground-truth data to evaluate the quality of each labeling method. Each method, whether an AI model or a human worker, is subjected to a range of accuracy checks, and future labeling tasks are routed to the optimal AI or human workers based on the resulting quality scores. This systematic approach weeds out subpar performers and continuously improves data extraction accuracy.
To ensure high performance from the outset, we train and monitor our Data Processing Crowd. Additionally, Super.AI deploys specialized models tailored to specific tasks. For instance, when detecting stamps or signatures in invoices, we employ dedicated stamp or signature detection models and integrate their results with the general invoice model.
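The routing idea can be sketched in a few lines. This is a toy illustration, not super.AI's actual routing logic; the source names and scores are made up, and the real system combines many accuracy checks.

```python
# Toy sketch of quality-based routing: each labeling source (AI model
# or human worker pool) carries a quality score from benchmark checks,
# and the next task goes to the best-scoring source. Names are invented.
def route_task(scores: dict[str, float]) -> str:
    """Return the labeling source with the highest quality score."""
    return max(scores, key=scores.get)

best = route_task({"invoice-model-v2": 0.97, "crowd-pool-a": 0.92})
```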
How long does it take to process a document?
Super.AI processes documents with an approximate speed of 60 seconds per page.
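Using that figure, a back-of-the-envelope estimate of total processing time is just pages times 60 seconds. Actual times vary with document complexity and load.

```python
# Rough estimate from the ~60 seconds/page figure above; real processing
# times vary with document complexity and platform load.
def estimated_processing_seconds(pages: int, seconds_per_page: int = 60) -> int:
    return pages * seconds_per_page

estimated_processing_seconds(10)  # a 10-page document: 600 seconds, ~10 minutes
```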
How many documents can be included in a batch?
The number of documents in a batch is limited by how many URLs fit within a 50MB CSV file, which is our maximum input size. Consequently, the exact number varies with the size of the individual URLs' contents.
What's the maximum size for a batch?
The entire batch must not exceed 50MB in total, based on the combined contents of the included URLs. In other words, regardless of how many URLs the CSV file lists, the aggregate size must stay under 50MB.
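A quick client-side check before submitting can save a rejected batch. This sketch assumes the 50MB limit is binary megabytes (50 × 1024 × 1024 bytes); if the platform counts decimal megabytes, adjust the constant.

```python
# Client-side pre-check against the 50MB batch limit described above.
# Assumes binary megabytes (1 MB = 1024 * 1024 bytes) -- an assumption,
# as the docs do not specify which convention applies.
MAX_BATCH_BYTES = 50 * 1024 * 1024

def batch_within_limit(size_bytes: int) -> bool:
    """Return True if a batch of this size fits under the 50MB cap."""
    return size_bytes <= MAX_BATCH_BYTES

# Usage: batch_within_limit(os.path.getsize("batch.csv"))
```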
Are there limitations on parallel batch processing?
There's a default constraint that permits processing up to 20 documents simultaneously. Should you need a higher capacity, please contact [email protected], and we can tailor the limits according to your requirements.
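If you drive submissions programmatically, it can help to throttle on the client side so you never exceed the default limit yourself. A minimal sketch, assuming `submit` is your own function that sends one document:

```python
# Client-side throttle mirroring the default limit of 20 parallel
# documents. The platform enforces its own limit; this just avoids
# exceeding it. `submit` is a placeholder for your submission function.
from concurrent.futures import ThreadPoolExecutor

MAX_PARALLEL = 20

def process_all(documents, submit):
    """Run submit() over documents with at most MAX_PARALLEL in flight."""
    with ThreadPoolExecutor(max_workers=MAX_PARALLEL) as pool:
        return list(pool.map(submit, documents))
```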
How does the system respond to an initial batch error or rejection?
What happens when a batch error is detected depends on the error-handling preference selected during batch submission:
- Skip data points with errors: If chosen, any data point encountering an error (such as issues retrieving URL contents) will be omitted, and the system will continue processing the rest of the valid entries.
- Reject all data points if one or more data points encounter errors: Under this setting, if even one document within the batch experiences an error, the entire batch will be declined, preventing any of the documents from entering the work queue.
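The two modes can be simulated in a few lines. The mode names below are paraphrases of the options above, not API flag names.

```python
# Illustrative simulation of the two error-handling modes described
# above; `skip_errors` is a paraphrase, not an actual API parameter.
def apply_error_policy(results, skip_errors: bool):
    """results: list of (doc_id, ok) pairs. Returns docs admitted to the queue."""
    if skip_errors:
        return [doc for doc, ok in results if ok]   # drop failing data points only
    if all(ok for _, ok in results):
        return [doc for doc, _ in results]          # whole batch accepted
    return []                                       # a single error rejects the batch
```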
🔗 For an in-depth guide on batch status retrieval and additional insights, see Retrieve a Batch Object.