Any analysis is only as good as the data it is based on. Thorough data preparation takes little effort at the beginning and pays back the invested time and cost many times over. Incorrect data attributes make corrections difficult or even impossible later in the process. During processing, data problems surface that the data owner could easily have detected and corrected at the outset. Onedot has found that often not enough supplier data is available for processing, or that the data is not of the required quality. To make the procedure easier and quicker for both sides, the data should be checked before it is uploaded to Onedot. This is how good input data is created.
Eight steps to good supplier data:
Each supplier uses a data model whose data is rich but not necessarily consistent with the data systems on the customer side. When you request the complete set of product data records, suppliers are usually happy to deliver them. Do not filter out columns or perform clean-ups in advance: this matters for the enrichment of the data, because more data leads to better results.
Several data formats sent at the same time are no obstacle for Onedot. Upload the files in your preferred format: structured and semi-structured files such as CSV, XLSX, JSON, XML, BMEcat, or PRICAT. This increases the speed of processing.
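Handling several formats side by side usually comes down to a small dispatch step before processing. The sketch below shows this for CSV and JSON using only the Python standard library; XLSX, XML, BMEcat or PRICAT would be handled analogously with suitable parsers. The function name and shape are illustrative, not part of any Onedot API.

```python
import csv
import io
import json

def parse_records(text, fmt):
    """Parse supplier data into a list of records, dispatching on the
    declared format. Only CSV and JSON are shown here; other formats
    would get their own branch with an appropriate parser."""
    if fmt == "csv":
        # Each CSV row becomes a dict keyed by the header line.
        return list(csv.DictReader(io.StringIO(text)))
    if fmt == "json":
        return json.loads(text)
    raise ValueError(f"unsupported format: {fmt}")
```

Dispatching early like this keeps the rest of the pipeline format-agnostic: everything downstream sees one uniform record structure.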
Files with a clean structure are easier to process. Make sure that the first line contains the names of the product attributes and that these names contain no special characters or umlauts. The data itself should start directly on the second line. This prevents unwanted and unexpected quality problems in the finished result.
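A check like this is easy to automate before upload. The following sketch, using only the standard library, verifies both conditions from the step above: attribute names in the first line without special characters or umlauts, and data starting directly on the second line. The function name and the allowed-character rule are assumptions for illustration.

```python
import csv
import io
import re

def check_header(csv_text):
    """Return True if row 1 holds clean attribute names (ASCII letters,
    digits, underscores only) and row 2 already contains data."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    header = rows[0]
    # Attribute names: no special characters or umlauts.
    names_ok = all(re.fullmatch(r"[A-Za-z0-9_]+", name) for name in header)
    # Data must start directly on the second line (no blank filler row).
    data_ok = len(rows) > 1 and any(cell.strip() for cell in rows[1])
    return names_ok and data_ok
```

For example, `check_header("sku,name,price\n1001,Hammer,9.90\n")` passes, while a header such as `Artikel-Nr.,Größe` would be flagged.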
Are all data exports really needed? Are production data and project lists with deadlines, for example, essential for the process? Delete data records that can be dispensed with. This selection reduces misunderstandings and helps the Onedot software gain the right insights.
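Once you have decided which fields are dispensable, removing them is a one-liner per record. The helper below is a hypothetical sketch; the list of dispensable fields depends entirely on your own export.

```python
def drop_dispensable(records, dispensable):
    """Remove attributes (columns) that are not needed for product
    data processing, e.g. internal production data or project
    deadlines. `dispensable` is a set of field names to discard."""
    return [{k: v for k, v in rec.items() if k not in dispensable}
            for rec in records]
```

Applied to a record containing an internal deadline, `drop_dispensable(rows, {"deadline"})` keeps only the product-relevant attributes.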
Not all data is equal. Does the data have the desired accuracy? Are there garbled or unreadable parts in the data sets? Take a brief look at the accuracy and adjust the data. This quality check helps to achieve a more accurate result.
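Garbled cells are often the result of encoding damage and can be spotted automatically. A minimal sketch, assuming that the Unicode replacement character or stray control codes are the symptoms to look for:

```python
def looks_garbled(cell):
    """Flag cells that contain the Unicode replacement character
    (U+FFFD) or control codes, a typical sign of encoding damage
    that makes parts of a data set unreadable."""
    return "\ufffd" in cell or any(
        ord(ch) < 32 and ch not in "\t\n" for ch in cell)
```

Legitimate non-ASCII text such as German umlauts passes this check; only truly damaged cells are flagged for review.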
In general, only valid data should be uploaded: consistent data types, defined maximum values, complete data sets, and uniform data with the same unit (e.g. price, currency, weight, mass). This enables a better allocation.
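Uniform units can be enforced by normalizing each value to one reference unit before upload. The sketch below does this for weights; the factor table is a deliberately small assumption and would be extended to whatever units actually occur in your data.

```python
def to_grams(value):
    """Normalize a weight given as e.g. '500 g' or '0.5 kg' to a
    single unit (grams), so that one column carries uniform values."""
    number, unit = value.split()
    # Conversion factors; extend for further units in your data.
    factor = {"g": 1, "kg": 1000}[unit.lower()]
    return float(number) * factor
```

After normalization, `'500 g'` and `'0.5 kg'` both become the same number, so comparisons and maximum-value checks work on one scale.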
How do you deal with missing values? As a rule, each data row should contain either a value or a marker for missing values, such as N/A. (no specification) or n/a (not applicable). This facilitates data processing.
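The rule that every cell carries either a value or an explicit marker can be applied mechanically. A minimal sketch, assuming empty cells should default to the N/A. marker while existing markers stay untouched:

```python
def mark_missing(row):
    """Give every cell either a value or an explicit missing-value
    marker: empty cells become 'N/A.' (no specification); existing
    values, including 'n/a' (not applicable), are kept as they are."""
    return [cell if cell.strip() else "N/A." for cell in row]
```

This keeps the distinction between "no specification" and "not applicable" intact, which the supplier alone can decide.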
The files should be free of access restrictions. Please remove all passwords, formulas, rules and formatting so that Onedot data specialists can open the files correctly and format them faster.

These steps help to prepare the data optimally. Companies today depend on high-quality, error-free data, and not only in data-intensive areas such as banking, insurance, retail or telecommunications. Scrubbing is essential when processing data from a database that is incorrect, incomplete or inconsistently formatted, and it works more accurately with pre-prepared data. Scrubbing vast amounts of data manually is a difficult, tedious and extremely error-prone task. That is why Onedot uses AI-based (artificial intelligence) data processing, as this is the only way to check data for errors systematically and conscientiously. With rules and AI algorithms you get clean, consistent and usable data.