Our organization sets most GDL MIME type files (Word, Excel, PDF) to open in HTML Content Viewer. We appreciate all images opening in tabs within HTML CV rather than downloading the files and having multiple native applications open in separate windows. We do have one issue: when HTML CV fails to display a file, we have no way to view it. We need a way to download the file when HTML CV is unable to display it. This could work similarly to how HTML CV handles unknown file types or password-protected files, where a prompt opens allowing the user to download the file. Alternatively, the user might be able to select Download from the source menu.
When a user tries to open an email (.msg) file uploaded to Chorus, the Content Viewer (CV) window opens but the email file is not displayed in the CV window; instead the file is downloaded and the user is then prompted by the browser to open it. This is fine unless the email file is large (e.g. contains a large attachment). When the user tries to open such a file, the CV window opens in chromeless mode, meaning there is no visible cue that the download is running; after a period with no file opened and no indication that anything is happening, the user is likely to close the window and report that the file cannot be opened. Is it possible to show some sort of status bar in the CV window to indicate that the download is in progress? This could be either a custom bar or the built-in browser status bar (I believe the latter would require the CV window to be launched with a flag that forces download progress to be shown in the status bar).
Users have the ability to suspend work, but they can use the suspend feature to take work out of quality by suspending it and then assigning an activation status that wakes it up in a different queue, essentially bypassing quality. We would like the suspension feature to have additional access controls that restrict the activation status option, or the ability to add an activation status as an additional resource. Thank you.
Chorus is not able to send a dynamic JSON array (one that contains the object keys to delete). The length of the array is defined at design time, so at runtime there is no apparent way to populate an array with a dynamic number of object keys. In other words, there is no dynamic array capability within Chorus, which presents a challenge for batch deletion of Chorus objects.
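A minimal sketch of the payload shape being described, assuming a hypothetical batch-delete endpoint that accepts a JSON array of object keys; the URL, field name, and keys are illustrative only and not part of the Chorus API:

```python
import json
import urllib.request

def build_batch_delete_payload(object_keys):
    """Build a JSON body whose array length is only known at runtime."""
    return json.dumps({"objectKeys": list(object_keys)}).encode("utf-8")

# Hypothetical endpoint and object keys, for illustration only.
keys_to_delete = ["OBJ-1001", "OBJ-1002", "OBJ-1003"]
request = urllib.request.Request(
    "https://chorus.example.com/api/objects/batch-delete",  # placeholder URL
    data=build_batch_delete_payload(keys_to_delete),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# response = urllib.request.urlopen(request)  # only against a real endpoint
```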
At present, there is no option to set timebound retries in the service cog configuration. A retry count can be set, but the choice is quite arbitrary since we cannot anticipate what the appropriate number should be. The performance/traffic overhead of such a retry mechanism also needs to be considered. In summary, there is no apparent way to put a small interval between retries.
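A sketch of the behaviour being asked for, assuming a generic callable stands in for the downstream service call (the parameter names and defaults are illustrative, not service cog settings):

```python
import time

def call_with_retries(operation, max_retries=3, interval_seconds=5, backoff=2.0):
    """Retry a callable with a pause between attempts.

    Illustrates a timebound retry: a small, configurable interval between
    attempts instead of immediate back-to-back retries.
    """
    attempt = 0
    while True:
        try:
            return operation()
        except Exception:
            attempt += 1
            if attempt > max_retries:
                raise
            time.sleep(interval_seconds)
            interval_seconds *= backoff  # optional exponential backoff

# Example usage with a stand-in for the service call:
# call_with_retries(lambda: unreliable_service_call(), max_retries=5, interval_seconds=2)
```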
The application passes sensitive parameters in the URL of many requests.

BUSINESS IMPACT: Sensitive data could be disclosed unintentionally through transmission in the URL.

Description: The application uses the URL to pass sensitive data from the client to the server. Data passed in the URL can be exposed because it ends up in unintended locations, including server logs, local browser history, and proxy logs.

Reproduction Steps: Using Burp Suite, navigate through the application. The application passes sensitive parameters in the URL of many of the requests.

Affected Locations: HUB v4.7 and Interact 4.7; Decipher v2.3.

Recommendation: When sensitive data is sent, POST requests should be used instead of GET requests. POST data is not treated the same way as URL data when requests pass through intermediate systems and does not typically get cached or logged. If technical constraints require data to be sent in the URL, strong encryption should be used to encrypt the values, and the encryption scheme should include protection against replay so that captured cryptographic values cannot be replayed to the server.

References: Information exposure through query strings in URL | OWASP Foundation; A02 Cryptographic Failures - OWASP Top 10:2021.
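A minimal illustration of the recommendation, using a placeholder URL and field name (neither reflects the actual HUB/Interact endpoints): the same parameter moved from the query string into a POST body.

```python
import json
import urllib.request

# Sensitive values end up in server logs, browser history, and proxy logs
# when sent like this:
#   GET https://hub.example.com/search?ssn=123-45-6789
#
# Sending the same value in a POST body keeps it out of the URL.
payload = json.dumps({"ssn": "123-45-6789"}).encode("utf-8")  # illustrative field only
request = urllib.request.Request(
    "https://hub.example.com/search",  # placeholder URL
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)
# response = urllib.request.urlopen(request)
```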
The search drop-down list within Processor Workspace is forced to be in alphabetical order. We are given the ability to specify the order of lookups in the LOOKUP resource's parameters list, so the drop-down should obey this order. Having frequently used searches at the top of the list is more streamlined and user friendly for the end users. This functionality used to exist but was taken away.
When a process model is deployed, the only place the details of who deployed it are recorded is the Design workspace. If a model is undeployed, there is no record of who did it or when. This can cause issues if a process model is accidentally undeployed. Recording who made the change in the Admin Audit Trail would make it possible to identify who made the update and react accordingly.
Having upgraded from AWD ViewStation to Chorus BPM, some of our processors are struggling with the fact that comment templates no longer exist. They now have to copy/paste their standard comments from a separate document into Chorus and feel that comment templates would streamline their processing.
Processors are finding it difficult to search for currency values with Quick Search because the lookup compares the user's search value to the raw database value. For example, if a LOB field is defined with a max length of 15, a user wanting to search for $500.50 would need to enter 000000000050050. Since processors don't have access to field definitions, they don't know how many leading zeros to add. Suggestion: detect currency fields and trim leading zeros on the backend.
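A sketch of the backend normalization being suggested, assuming the database stores currency as a zero-padded, implied-two-decimal string (the field length and parsing rules are assumptions for illustration):

```python
from decimal import Decimal

def to_stored_format(user_value, field_length=15):
    """Normalize a user-entered currency value to the zero-padded DB format.

    Assumes the amount is stored as an implied-two-decimal integer string
    padded with leading zeros to the field length, matching the example
    above: $500.50 -> 000000000050050.
    """
    amount = Decimal(user_value.replace("$", "").replace(",", ""))
    cents = int(amount * 100)
    return str(cents).zfill(field_length)

print(to_stored_format("$500.50"))  # -> 000000000050050
```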
Feedback from our processors: Using the Quick Search lookup we are able to save lookups; however, if one of the 'Fields' needs updating we cannot edit it, we have to delete the field from the query and re-enter it. Could it be made editable instead? This would make day-to-day searches easier, particularly those where the date needs to be altered every day, streamlining our processing times.
Add visual markers or color-coded highlights in the timeline to indicate where filter-triggering events occur, enabling users to quickly identify and navigate to these critical points without manual scanning. Without this feature: users have to manually review each timeline to locate trigger events, leading to inefficiencies, missed insights, and reduced satisfaction with the tool. Goal: enable faster and more accurate identification of filtered events, improving user experience, boosting adoption, and accelerating decision-making.
Introduce functionality to filter timeline events based on designated points, allowing users to view only events that occur before or after a specified event (e.g., show the journey after a letter X is issued). Without this feature: users face difficulties isolating relevant parts of the timeline for analysis, leading to slower insights, increased manual effort, and a less efficient user experience.
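A minimal sketch of the requested filter, assuming a timeline is an ordered list of event names (the event names below are illustrative, not real identifiers):

```python
def events_after(timeline, trigger):
    """Return only the events that occur after the first occurrence of `trigger`."""
    if trigger not in timeline:
        return []
    return timeline[timeline.index(trigger) + 1:]

journey = ["Claim received", "Letter X issued", "Payment made", "Case closed"]
print(events_after(journey, "Letter X issued"))  # ['Payment made', 'Case closed']
```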
Allow users to assign multiple categories to a single event in the timeline, enabling richer classification without the need to overwrite or remove existing tags. For example, categorize an event as both "FTE event" and "legacy tag applied." Without this feature: Users must repeatedly create or edit categories for different analyses, leading to inefficiencies, reduced flexibility, and potential loss of critical context for multi-faceted events. Goal: enhance event categorization flexibility, allowing users to perform diverse and overlapping analyses seamlessly while preserving existing tags for future use.
Enable the creation of metrics to count the number of times a specific pattern (e.g., X → Y → Z) occurs across all timelines, rather than limiting the count to the number of timelines where the pattern exists. Without this feature: Users cannot accurately quantify the frequency of specific patterns within timelines, leading to incomplete analysis and missed insights into recurring behaviour. Goal: provide detailed insights into pattern frequency across timelines, improving the depth and accuracy of process analysis in the Predecessor Analysis module.
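A sketch of the distinction being requested: counting every occurrence of a pattern versus counting only the timelines that contain it. The event names and data are illustrative, and the pattern is treated as a contiguous sequence of events:

```python
def count_pattern(timeline, pattern):
    """Count every occurrence of `pattern` as a contiguous subsequence."""
    n, m = len(timeline), len(pattern)
    return sum(1 for i in range(n - m + 1) if timeline[i:i + m] == pattern)

timelines = [
    ["X", "Y", "Z", "A", "X", "Y", "Z"],  # pattern occurs twice in this timeline
    ["X", "Y", "Z"],
    ["A", "B"],
]
pattern = ["X", "Y", "Z"]

occurrences = sum(count_pattern(t, pattern) for t in timelines)                # 3
timelines_with_pattern = sum(count_pattern(t, pattern) > 0 for t in timelines)  # 2
print(occurrences, timelines_with_pattern)
```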
Extend the TO DO List operations to allow replacing multiple substrings with the same new value. Users can input multiple strings to be replaced (e.g., separated by commas) or use an updated UI that supports selecting multiple options for streamlined bulk replacements. Without this feature: users must perform repetitive replacements one substring at a time, leading to inefficiencies, increased manual effort, and a higher risk of errors when managing large datasets. Goal: simplify and accelerate substring replacement tasks by supporting multiple options in a single operation, improving efficiency and user experience for bulk data management.
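A minimal sketch of the bulk replacement described above, with illustrative input values; the user would supply the substrings once (e.g. comma separated) instead of running one replacement at a time:

```python
def replace_many(value, old_substrings, new_value):
    """Replace each of several substrings with the same new value."""
    for old in old_substrings:
        value = value.replace(old, new_value)
    return value

print(replace_many("Dept-A / Dept-B / Dept-C", ["Dept-A", "Dept-B"], "Dept-X"))
# -> Dept-X / Dept-X / Dept-C
```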
Add the ability to use logical operators (AND, OR, NOT) in derive field conditions, allowing users to define more complex and precise criteria for deriving new fields in the dataset. Without this feature: users are limited to basic conditions, requiring workarounds or multiple iterations to achieve desired results, leading to inefficiencies and reduced flexibility in field derivation. Goal: enable more advanced and flexible field derivations by incorporating logical operators, enhancing analytical capabilities and reducing manual effort.
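A sketch of the kind of compound condition being asked for, expressed in Python with illustrative field names and thresholds:

```python
def derive_priority(record):
    """Derive a field from a compound condition using AND, OR and NOT.

    Field names and thresholds are illustrative only.
    """
    high_value = record["amount"] > 10_000
    overdue = record["days_open"] > 30
    closed = record["status"] == "closed"
    if (high_value and overdue) and not closed:
        return "escalate"
    if high_value or overdue:
        return "review"
    return "standard"

print(derive_priority({"amount": 25_000, "days_open": 45, "status": "open"}))  # escalate
```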
Introduce a percentage format option for derived metrics when shown in the Side by Side module, allowing users to display results as percentages (e.g., 92%) instead of decimals (e.g., 0.92). Currently, when derived metrics or KPIs are used in Side by Side, we can't use number formatting to achieve percentage values. Goal: improve data presentation and user experience.
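A trivial sketch of the formatting being requested, rendering a decimal metric as a percentage string:

```python
def as_percentage(value, decimals=0):
    """Format a decimal metric (e.g. 0.92) as a percentage string (e.g. 92%)."""
    return f"{value * 100:.{decimals}f}%"

print(as_percentage(0.92))       # 92%
print(as_percentage(0.875, 1))   # 87.5%
```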
Include a breakdown table in the Predecessor Analysis module to provide detailed tabular insights, such as the frequency of each predecessor pattern, associated attributes, and their distribution across timelines. Currently, the Breakdown feature used in the Predecessor module does not provide a table presenting values in tabular format that users can copy easily (which is possible in the Breakdown module). Goal: enhance analysis depth, enable better comparisons, and make it easy to copy data from a tabular format.
Allow users to open and apply saved queries from the Query Module directly within the Predecessor and Classifications modules, enabling easier access to previously defined queries for streamlined analysis. Without this feature: users must manually recreate queries for different modules, leading to inefficiencies, redundancy, and potential errors in replicating complex query conditions across modules. Goal: improve workflow efficiency and consistency by enabling users to quickly access and use saved queries across different modules, reducing repetitive tasks and enhancing analysis speed.