Since the beginning of the modern era of computing, which started around the 1970s, the data model has been essentially the same as the model used to persist data in some (relational) database. The point made two decades later by proponents of DDD (Domain-driven Design) sounded like a curse.
Keep domain models agnostic of database concerns
That reminded me of the blurred stories of those early MS-DOS developers who first heard from Microsoft Windows tech specialists that the next version of the operating system would manage memory on behalf of coders!
You cannot be serious.
A domain model represents the concepts and entities within a specific problem domain. It is essentially a conceptual model of the problem space, which defines the structure, behavior, and relationships of the entities within that domain. A domain model typically includes entities, value objects, aggregates, and the relationships between them. These models serve as a common language between domain experts and software developers, helping to bridge the gap between business requirements and technical implementation.
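To make the vocabulary concrete, here is a minimal sketch (in Java, purely for illustration) of the two most common building blocks: an entity, identified by its id, and a value object, identified only by its attributes. The `Order` and `Money` types are hypothetical, not taken from any real codebase.

```java
import java.math.BigDecimal;
import java.util.ArrayList;
import java.util.List;

// Value object: defined entirely by its attributes, immutable.
record Money(BigDecimal amount, String currency) {
    Money add(Money other) {
        if (!currency.equals(other.currency))
            throw new IllegalArgumentException("Currency mismatch");
        return new Money(amount.add(other.amount), currency);
    }
}

// Entity: defined by its identity (id), regardless of attribute values.
class Order {
    private final String id;
    private final List<Money> lines = new ArrayList<>();

    Order(String id) { this.id = id; }

    String id() { return id; }

    void addLine(Money amount) { lines.add(amount); }

    // Domain behavior lives on the model, not in the database layer.
    Money total() {
        return lines.stream()
                    .reduce(new Money(BigDecimal.ZERO, "EUR"), Money::add);
    }
}
```

Note that nothing in this model says anything about tables, columns, or SQL: that is exactly the separation DDD argues for.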
How does this model relate to the database data model?
In a significant number of cases, there's no relevant difference and the two models conceptually match up quite closely. In many other situations, business-specific workflows force developers to use an intermediate layer of software objects that manipulate data agnostic of its actual storage and structure.
The approach to take is as simple as doing what best serves your scenario, as obvious as it may sound. If you feel confident keeping all read/write logic in the SQL layer, do that; if you prefer more flexible coding via C# and an abstraction over SQL's nitty-gritty details, do that. It's never a matter of determining who's right and who's wrong.
By keeping the domain models separate from database-specific concerns, developers can focus on modeling the domain accurately and efficiently, while also ensuring that the application remains adaptable to future changes in the database technology landscape. However, achieving such a neat separation is NOT free of charge.
Using handwritten SQL forces you to mince data items down to the level of primitive types and move clumps of data in and out of the SQL layer.
Using an abstraction layer over SQL (i.e., an O/R mapper like EF Core, or a micro-ORM like Dapper) forces you to plan two distinct models: one that maps 1:1 with the physical tables and columns, and one that exposes the stored data to the upper layers of the code in a more natural format.
If you work with two models, you also need a third element: mappers between them.
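The two-model arrangement can be sketched as follows (Java, hypothetical types): a flat `CustomerRow` mirroring the physical table, a richer `Customer` domain type, and the mapper that is the price of keeping them separate.

```java
// 1:1 with the physical table: flat, primitive-typed "columns".
record CustomerRow(String id, String firstName, String lastName, String country) {}

// Domain model: exposes the same data in a more natural shape.
record FullName(String first, String last) {
    String display() { return first + " " + last; }
}
record Customer(String id, FullName name, String country) {}

// The third element: the mapper translating between the two models.
final class CustomerMapper {
    static Customer toDomain(CustomerRow row) {
        return new Customer(row.id(),
                            new FullName(row.firstName(), row.lastName()),
                            row.country());
    }

    static CustomerRow toRow(Customer c) {
        return new CustomerRow(c.id(), c.name().first(), c.name().last(), c.country());
    }
}
```

Every new table or column touches all three pieces, which is exactly the "not free of charge" part of a neat separation.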
How purist are you?
For me, pragmatism always wins: one model, flexible enough to serve the needs of the application but still well aware of its own need for persistence (and therefore, in a way, "impure"), is the preferable approach. To some extent, EF Core (and .NET Reflection) favor this approach.
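A naive sketch of the mechanism that makes the single-model approach viable (shown in Java reflection rather than .NET Reflection, and with hypothetical `Invoice` and `SimpleMapper` types): the same object carries domain behavior and, via its public fields, enough shape for a mapper to derive column names and values automatically.

```java
import java.lang.reflect.Field;
import java.util.LinkedHashMap;
import java.util.Map;

class Invoice {
    public String number;  // doubles as the "number" column
    public double amount;  // doubles as the "amount" column

    // Domain behavior lives on the same object that gets persisted.
    boolean isLarge() { return amount > 1000; }
}

final class SimpleMapper {
    // Turns any flat object into a column-name -> value map; a toy
    // stand-in for what an O/R mapper does under the hood via reflection.
    static Map<String, Object> toColumns(Object entity) {
        Map<String, Object> columns = new LinkedHashMap<>();
        try {
            for (Field f : entity.getClass().getFields()) {
                columns.put(f.getName(), f.get(entity));
            }
        } catch (IllegalAccessException e) {
            throw new RuntimeException(e);
        }
        return columns;
    }
}
```

The "impurity" is the bargain: the model's shape is constrained by persistence, but you write it once and skip the second model and the mappers entirely.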
No data model exists without the necessity of being persisted.