This is a troubleshooting guide for everything related to State/Known Data Sets.
Some of the student records from my file aren't showing up in DnA–what's going on?
There are a few different things that might be causing the problem, so it's best to check them off through a process of elimination. We'll take them one at a time:
The Student Doesn't Exist

Of course, in the most basic existential sense, we have no doubt that this student exists. However, the first question to consider is, "Does the student exist in DnA?"
An admin-level user who has the appropriate permissions and affiliations can confirm this through Search 2.0, by first setting the "Status" to "Any Enrollment at." No dice? Then try running another search, this time for "No Enrollment data."
If the student doesn't turn up through either search, then there's a good chance that he/she doesn't have a profile in DnA, and State/Known data cannot match records to students who don't exist in the system. This is because (1) a studemo.txt file was never imported for the student(s) in question (DnA clients), OR (2) in the case of an ISI client, the student's profile was never created in the DnA UI.

ISI only – Another searching option is to use the Create New Student page, which has a Pre-Search feature that can pull all records (regardless of enrollment or lack thereof) of students with a given name.
When in doubt, somebody with a little SQL knowledge and access to our databases can query the public.students table to confirm.
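For example, here is a minimal sketch of such a query, assuming a PostgreSQL database. The public.students table name comes from this guide; the connection details, column names, and sample values are assumptions for illustration only:

```python
# Sketch: confirm a student profile exists in public.students.
# The table name comes from this guide; the columns (first_name,
# last_name, birth_date) and connection string are hypothetical.
import psycopg2

conn = psycopg2.connect("dbname=dna user=readonly")  # hypothetical connection
with conn.cursor() as cur:
    cur.execute(
        """
        SELECT *
        FROM public.students
        WHERE last_name = %s AND first_name = %s AND birth_date = %s
        """,
        ("Ulis", "Tyler", "1996-01-05"),  # hypothetical student values
    )
    rows = cur.fetchall()

print(f"Found {len(rows)} matching profile(s)")  # 0 rows = no profile in DnA
```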
A Problem with Enrollment
Okay, so you've confirmed that the missing student(s) do have profiles in DnA–what next?
It's important to note that, when you are logged into DnA, the student data you are able to view is controlled by the Enrollment/Rostering date set in your Control Panel. Because of this, if you have students who do not have enrollment data for the session your Control Panel is set to (or any Enrollment data at all), you will not see them in any of the Assessments or Reports you look at, including those that feature State/Known data. So, it's always advisable to check for an Enrollment issue.
Rather than switching your Control Panel back and forth through various past and future enrollment dates, the quickest way to check for an Enrollment issue is through a Custom Report.
1. First, find the Assessment that contains your State/Known Data via the View Assessments page.
2. Click on the title of the assessment to open it.

3. Click Duplicate as Custom Report.

4. After selecting your new Custom Report, go to Filters and filter the "Date Imported into DnA" column to equal the date you imported your records. If this column doesn't exist in the Custom Report, that's not a huge problem; it'll just make things slightly more complex later on.
5. Then, navigate to the Advanced menu and select Students.

6. To begin, you want to check for all Enrollments, so select Any Enrollment at as the Visibility style. Then, view your report again and look at the total number of records. Recall from above that Search 2.0 featured this same option.
7. After that, you'll need to check for students who have no enrollment. Do this by switching the Visibility style to No Enrollment data. As before, view your report and note the total number of records.
Taken together, the record counts from "Any Enrollment at" and "No Enrollment data" should add up to the total number of records that were imported. If, in Step 4, you weren't able to filter the Custom Report on "Date Imported into DnA," keep in mind that this total will include all records imported for this assessment by any user at your Instance (not just the most recent import that you ran).
If you find that the two counts ("Any Enrollment at" plus "No Enrollment data") do add up to the total number of records imported for the assessment, then you've successfully isolated the problem to enrollment. Every imported record matched to a student in DnA, and any remaining issues with viewing that data must be addressed by fixing the Enrollment data.
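The reconciliation you're performing here is simple arithmetic. A minimal sketch, with made-up numbers:

```python
# Sketch: reconciling record counts from the two Visibility styles.
# All of these numbers are hypothetical.
records_imported = 250    # total records in your import file
any_enrollment_at = 238   # count from the "Any Enrollment at" report
no_enrollment_data = 12   # count from the "No Enrollment data" report

matched = any_enrollment_at + no_enrollment_data
if matched == records_imported:
    print("Every record matched a student; missing records are an Enrollment issue.")
else:
    print(f"{records_imported - matched} record(s) never matched; check the data file.")
```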
A Problem with the Data File

If you've been able to rule out an enrollment issue, then the next place to look for a problem will be with the file itself.
When looking at the file, a key question emerges: "How do records get matched to students in DnA?" In other words, how does the import tool identify the correct record in the import file to match to a particular student?
The short answer is that the import tool uses the SSID (State Student ID), the Local Student ID, and/or a unique combination of a student's Last Name, First Name, and Birth Date. The long answer is that this differs for each import tool. Not all assessments have a field where a Local Student ID might be found, so those import tools don't match on the Local Student ID (in DnA). Alternately, a different assessment might have a field that matches to the SSID in DnA, but the assessment doesn't call that field "SSID"–it calls it something cryptic and non-obvious, such as "User Defined Field #3."
To sort out that confusion, you'll need to know a couple things:
- What fields does the import tool use, from the file, to match records to students in DnA?
- What are those fields called? Where are they located?
- Which specific pieces of student data do those fields match to?
To answer these questions, each assessment has a dedicated Help document, organized by State, and the section you're looking for, once you find the assessment in the document, is "How Are Records Matched to Students in DnA?"
You can find this page, for each assessment, by navigating to DnA System Admin > State Known Data Sets > ______ State Assessments. There, you can open up the specific State you are working with, then locate the specific assessment in question.
To see how this works out in practice, we'll use the specific example of Arizona's AZELLA. We'll take the 2015-2016 version, as illustrated above.
Each import tool matches records by linking certain fields of student data in DnA (e.g. State Student ID, Local Student ID) to certain fields from the file (in the case of AZELLA 2015-2016: SAIS, School ID). The matching happens in a sequential order:
- 1st: The tool will look at column #25, titled "SAIS," and look for a State Student ID in DnA that carries the same value. If that fails, then . . .
- 2nd: The tool will look again at column #25, "SAIS," only this time looking to match it to a Local Student ID in DnA of the same value. And if that fails . . .
- 3rd: The tool will look at column #26, "School ID," and try to match it to a Local Student ID in DnA of the same value.
(NOTE: If the import tool requires a "fixed-width" file, then the location of these matching columns is given as a position range (e.g. 128-135), rather than as a specific column number.)
If all three of the above scenarios fail, then the record will not get matched to a student in DnA. It doesn't matter what kind of tricks you try with Enrollment/Rostering dates – if the matching fails at this level, then the imported record can't be seen in the DnA UI–full stop. Keep in mind, this doesn't necessarily mean that the record didn't import at all. The record may very well exist in our database. But, because it's not matched to any students, it's just sitting there in a fairly useless state.
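To make that order of operations concrete, here is a minimal sketch of the cascade in code. The lookup structures and helper name are hypothetical stand-ins, not real DnA internals; the matching order is the one described above:

```python
# Sketch of the AZELLA 2015-2016 matching cascade described above.
# ssid_index and local_id_index stand in for lookups against DnA's
# student data; they are hypothetical structures, not DnA internals.

def match_record(row, ssid_index, local_id_index):
    """Return the matched DnA student ID, or None if all three steps fail."""
    sais = row[24]       # column #25, "SAIS" (0-based index 24)
    school_id = row[25]  # column #26, "School ID"

    # 1st: SAIS -> State Student ID
    if sais in ssid_index:
        return ssid_index[sais]
    # 2nd: SAIS -> Local Student ID
    if sais in local_id_index:
        return local_id_index[sais]
    # 3rd: School ID -> Local Student ID
    if school_id in local_id_index:
        return local_id_index[school_id]
    # All three failed: the record imports unmatched and won't appear in the UI.
    return None

# For a fixed-width file, you would slice by position range instead, e.g.:
# sais = line[127:135]  # positions 128-135 (hypothetical range)
```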
Now, you're ready to look at the file itself.
- In the scenario above, for any student records that you can't find in DnA (after eliminating Enrollment as a potential problem), you'll want to locate columns 25 and 26.
- Does the student record in question have any data in one, or both, of those columns? If so, does the data in those columns match the SSID and/or Local Student ID that you see in DnA? If the answer to either of those questions is "No," then it's beginning to look like the data file itself has a problem.
- If both columns 25 and 26 are completely empty for one or more students, we might ask: why did the vendor (the Arizona Department of Education, in this case) supply records that did not include SAIS or School ID?
- If the data in columns 25 and/or 26 is different from the State Student ID or Local Student ID in DnA, we might ask: are those pieces of data correct in DnA? If they are, why did the vendor provide incorrect data? Our solution to the problem will vary depending on the answers to those questions. (A sketch for auditing a file against both of these issues follows below.)
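Here is the promised sketch for auditing a whole file against both issues at once. The file name and the set of known IDs are hypothetical; in practice you'd pull the IDs from DnA or your SIS:

```python
# Sketch: scan an AZELLA 2015-2016 CSV for records with blank or
# mismatched matching columns. "ids_in_dna" is a hypothetical lookup
# of known SSIDs/Local Student IDs.
import csv

ids_in_dna = {"12345678", "87654321"}  # hypothetical known IDs

with open("azella_2015_2016.csv", newline="") as f:  # hypothetical file name
    for line_num, row in enumerate(csv.reader(f), start=1):
        sais, school_id = row[24], row[25]  # columns #25 and #26
        if not sais and not school_id:
            print(f"Line {line_num}: both matching columns are empty")
        elif sais not in ids_in_dna and school_id not in ids_in_dna:
            print(f"Line {line_num}: IDs don't match any student in DnA")
```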
Everything looks good, but the records still don't match–why?
It's hard to say, but if you've come this far and you still aren't able to isolate the issue, then it may be a bug on our end.
"What do you mean by 'correctly open?' Why can't I just open it like any other file?"
A: You can't open it like any other file because Excel likes to auto-format things.
For various reasons, some of our clients want to open and edit their State/Known data files before importing them into DnA (perhaps because the test vendor doesn't provide SSIDs in the file, and the client needs to add them). Whatever the reason, opening and editing the file before importing isn't inherently problematic–as long as it's done correctly. A few simple steps, outlined below, show how to properly perform this operation.
1. Begin by opening a blank Excel sheet.

2. Select Data > Get External Data > Import Text File.

3. Choose your File > Get Data.

4. Choose your file type. In this example, we have a CSV, so we're choosing "Delimited."

5. Choose the delimiter type. In this example, we have a CSV, so we're choosing "Comma" as the delimiter.

6. While holding the SHIFT key, move the cursor in the "Data preview" window all the way to the right, then LEFT CLICK the last column (this selects every column at once). Then, select "Text" under the "Column data format" and click "Finish."
And Presto!
You now have a delimited text file that has been opened in Excel, and none of the original data from that file has been auto-formatted or otherwise altered in the process.
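If you'd rather skip Excel entirely, the same "treat every column as text" idea can be scripted. Here is a minimal sketch using the pandas library (the file names are hypothetical):

```python
# Sketch: open a State/Known data CSV with every column forced to text,
# so nothing gets auto-formatted (no dropped leading zeros, no date mangling).
import pandas as pd

df = pd.read_csv(
    "azella_2015_2016.csv",   # hypothetical file name
    dtype=str,                # read every column as text
    keep_default_na=False,    # keep blank cells as "" instead of NaN
)

# ... make your edits here (e.g. fill in missing SSIDs) ...

df.to_csv("azella_2015_2016_edited.csv", index=False)  # write back out unaltered
```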
What happens if I import data for the same assessment more than once? Will that create duplicate records?
Great Question! And the answer is . . .
We don't know.
Well, that's not entirely true. The better answer is . . .
It depends on which assessment we're talking about.
Not only that, but it also depends on which year of which assessment we're talking about. For example, the answer to our question would be different between SAT 2011-2012 and SAT 2015-2016.
Why is that? Why can't we just make it the same for every assessment, and every year of every assessment?
One reason is that each assessment has a different layout, and the layout for a single assessment might change from one year to the next. Another reason is that each assessment reports on different data: duplicate records might be perfectly acceptable and expected for one assessment, while for a different assessment we would not expect any duplicates at all.
Fine then–I have a specific assessment in mind, so how do I figure it out?
You can do that right here, in Help, by navigating to DnA System Admin > State/Known Data Sets.
Once there, you'll find all the states for which we have import tools listed. Under each state is each assessment for which we've built an import tool. And, under each assessment, you'll see the question "How Are Unique Records Identified?".

Okay, I'm there. But I don't understand what this page is telling me...
Understanding what the "How Are Unique Records Identified?" page is telling you will require a bit of explanation. So, let's look at Arizona's AZELLA assessment as an example.
Above, we see that we have import tools available for 4 different years of the AZELLA assessment (2012-2013, 2013-2014, 2014-2015, and 2015-2016).
Let's start from the top: 2015-2016. What this page is telling us is that, for the AZELLA 2015-2016 import tool, we use a column called "Doc UIN," which is the 7th column in the file, to identify unique records.
What do you mean by "identify unique records?"
Each import tool is built with certain columns selected as "keys." These keys are used to determine, upon import, which records are unique. If you've never imported AZELLA 2015-2016 data before, all records will be unique, because nothing has been imported for that assessment yet. However, if you're importing AZELLA 2015-2016 data for a 2nd or 3rd time, and some of the same students are involved in those subsequent imports, then the "key" column(s) become very relevant. To illustrate how this works in simple terms, let's look at a fake assessment with two columns used as unique keys, and find out what happens when we import more than once.
Above, we are importing two records for our Fake Assessment 2015-2016. We've never imported Fake Assessment 2015-2016 before, so both of those records will be unique, and both of them will get written onto the data table. So, if you look at a Custom Report in DnA that uses this data, it would match the above table exactly.
Next, we decide to import another Fake Assessment 2015-2016 file. We're importing for the same two students again, but this time a couple of things have changed. If we want to know whether or not these new imports will create duplicate records, or whether or not they will update/overwrite the existing records, we need to pay attention to what happens with our "key" columns, which are Student SSID and Date Taken.
So, we imported 4 records total (2 the first time around, and 2 the second time around). Yet, in our Custom Report, we show 3 records.
What does this tell us? It tells us that, when we ran the import the second time, one of those records became an "overwrite," while the other became a duplicate. When we examine the Custom Report more closely, we see exactly what happened: student Tyler Ulis had his original record overwritten, while student Jamal Murray had a duplicate record created. To understand why it went down this way, you have to pay attention to what's happening in those "key" columns (highlighted in green). Let's examine each student individually:
Tyler Ulis:
When we first imported for this student, we see that he has a score of 85 (a "B") from taking the test on 03/14/16. When we imported the second time, both the SSID and the Date Taken were the same. This means our import tool will treat this second import as the same record as the first–not unique. One way to interpret that data: the first time around, Tyler was given an incorrect score, and the second import is his corrected record (in other words, he actually got a 92 when he took the test on 03/14/16).
Jamal Murray:
When we first imported for this student, he scored a 73 (a "C") when he took the test on 03/14/16 (the same date as Tyler Ulis). When we imported the second time, however, the Date Taken had changed. This means our import tool will treat this second import as a unique record: it's not the same record as our first import, so we don't want to overwrite that first import. The way to interpret that data would be that, unlike Tyler, when Jamal took the test on 03/14/16, he was indeed scored correctly, so that record didn't change. But, also unlike Tyler, Jamal took this assessment a second time to see if he could improve his score–and indeed he did! He scored a 95 (an "A") when he retook the test on 04/15/16. Stakeholders may want the ability to see both test administrations for students who take the test more than once; by making Date Taken a "key" column for identifying unique records, our import tool makes sure that we don't overwrite records for students who retake the test.
Going forward... what happens with any future imports for these two students will depend on what happens in those "key" columns.
If something changes in either the Student SSID or Date Taken column, the import will create a new (duplicate) record. If nothing changes, it will overwrite the existing one. Simple as that.
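To make that overwrite-vs.-duplicate logic concrete, here is a toy sketch of the upsert behavior using the two "key" columns from our Fake Assessment example (the SSIDs are made up):

```python
# Sketch: how "key" columns decide overwrite vs. new record.
# Keys here are Student SSID and Date Taken, per the Fake Assessment example.
table = {}  # records keyed by (ssid, date_taken)

def import_record(ssid, date_taken, score):
    key = (ssid, date_taken)
    action = "overwrote existing" if key in table else "created new"
    table[key] = score  # same key overwrites; new key adds a record
    print(f"{action} record for {key}")

# First import
import_record("1111", "03/14/16", 85)  # Tyler Ulis (hypothetical SSID)
import_record("2222", "03/14/16", 73)  # Jamal Murray (hypothetical SSID)

# Second import
import_record("1111", "03/14/16", 92)  # same keys -> overwrite (corrected score)
import_record("2222", "04/15/16", 95)  # Date Taken changed -> new record (retake)

print(f"{len(table)} records in the table")  # 3, matching the Custom Report
```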
File Format Validation
This feature verifies whether the upload file matches the number of fields/characters we expect, and will kick back an error if not.
These are all of the data sets that are affected:
- CAASP
- CAHSEE
- CALPADS AISP
- CELDT
- PFT
- STAR
- Preliminary Science CAPA 2015-2016
- Preliminary Science CMA 2015-2016
- Preliminary Science CST 2015-2016
- Smarter Balanced Assessment 2014-2015 Interims
- Smarter Balanced Assessment 2014-2015 Summative (Preliminary)
- Smarter Balanced Assessment 2015-2016 Interims
- Smarter Balanced Assessment 2015-2016 Summative (Preliminary)
- Smarter Balanced Assessment 2016-2017 Interims
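To give a sense of what that check amounts to, here is a minimal sketch. The expected field count is hypothetical; the real value depends on each data set's layout:

```python
# Sketch: the kind of field-count validation described above.
# EXPECTED_FIELDS is hypothetical; each data set's layout defines its own.
import csv

EXPECTED_FIELDS = 60

def validate(path):
    with open(path, newline="") as f:
        for line_num, row in enumerate(csv.reader(f), start=1):
            if len(row) != EXPECTED_FIELDS:
                raise ValueError(
                    f"Line {line_num}: expected {EXPECTED_FIELDS} fields, got {len(row)}"
                )
    print("File layout looks good")
```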