Here’s another Database Design Mistake to Avoid
Once I saw a table that clearly had too many fields in the primary key.
Every field you add to the PK multiplies the maximum possible number of rows the table can hold.
Maximum possible rows in the table =
(unique count of Field1) × (unique count of Field2) × (unique count of Field3)
In the case I saw, the key was meant to be a code plus a time of day, recorded to the second. For any one code, that allows a maximum of 86,400 possible rows per day (one per second). In actuality, there would have been only a few dozen rows per day at the very most.
By including a third field in the PK, a second timestamp, the maximum possible rows per day became 86,400 squared.
That's 7,464,960,000 rows, roughly 7.5 billion, allowed per code, per day!
The result: duplicate data.
Multiple rows with the same code and timestamp in the first two fields were inserted, which caused problems.
How did the rows get into the table? Well, the primary key allowed it.
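To make that concrete, here's a minimal sketch using Python's built-in sqlite3 module. The table and column names (readings, code, event_time, loaded_at) are hypothetical stand-ins, since I can't share the original schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Three-column PK: code + event time + a second timestamp (the mistake).
conn.execute("""
    CREATE TABLE readings (
        code        TEXT,
        event_time  TEXT,   -- time of day, to the second
        loaded_at   TEXT,   -- second timestamp, wrongly part of the PK
        PRIMARY KEY (code, event_time, loaded_at)
    )
""")

# Same code and event time, different load timestamps: both inserts succeed,
# because the PK only rejects rows where all three columns match.
conn.execute("INSERT INTO readings VALUES ('A1', '08:15:00', '08:15:01')")
conn.execute("INSERT INTO readings VALUES ('A1', '08:15:00', '08:15:02')")

print(conn.execute("SELECT COUNT(*) FROM readings").fetchone()[0])  # 2 -- a duplicate got in
```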
We had to delete the duplicate rows manually, which defeated the purpose of the automation.
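The cleanup looked something like this, continuing the sketch above. SQLite's implicit rowid makes it easy to keep one row per code-and-time pair; other engines would use ROW_NUMBER() or similar:

```python
# Keep the first row for each (code, event_time) pair, delete the rest.
conn.execute("""
    DELETE FROM readings
    WHERE rowid NOT IN (
        SELECT MIN(rowid)
        FROM readings
        GROUP BY code, event_time
    )
""")
print(conn.execute("SELECT COUNT(*) FROM readings").fetchone()[0])  # 1
```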
The fix: remove the second date field from the primary key, and if it must always be populated, declare the column NOT NULL instead.
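Under the same hypothetical names as above, the corrected table would look like this; the duplicate insert now fails at the database instead of slipping through:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Two-column PK; the second timestamp stays, but only as a NOT NULL column.
conn.execute("""
    CREATE TABLE readings_fixed (
        code        TEXT,
        event_time  TEXT,
        loaded_at   TEXT NOT NULL,  -- required to be populated, not part of the key
        PRIMARY KEY (code, event_time)
    )
""")

conn.execute("INSERT INTO readings_fixed VALUES ('A1', '08:15:00', '08:15:01')")
try:
    # Same code and event time, different load timestamp: now rejected.
    conn.execute("INSERT INTO readings_fixed VALUES ('A1', '08:15:00', '08:15:02')")
except sqlite3.IntegrityError as err:
    print("Rejected:", err)  # UNIQUE constraint failed
```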