Sunday, December 21, 2014

SQL / Postgres, big table vs. multiple tables, how to performance-test?

I'm working on a Postgres database where I've got some records that will be queried and accessed all the time, and each record has an open-ended amount of optional "metadata" that will be collected over time.


To illustrate the idea, consider a case like a national car dealership network. You could have a table like:



Vehicles
--------
id
type
status
location


That information is all required and is needed in almost every query. But there's a bunch of optional extra information like make, model, year, mileage, color, previous owner, MSRP, blue book value, etc.


Those could all just be fields on the Vehicles table, or they could be fields in a different table like VehicleMetadata, with a foreign key pointing to the specific vehicle.
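To make the two options concrete, here's roughly what I mean in Postgres DDL (the names and column types are just my guesses for illustration):

-- Option A: one wide table, with the optional fields as nullable
-- columns alongside the required ones.
CREATE TABLE vehicles (
    id       serial PRIMARY KEY,
    type     text NOT NULL,
    status   text NOT NULL,
    location text NOT NULL,
    make     text,            -- everything from here down is optional
    model    text,
    year     integer,
    mileage  integer,
    color    text
);

-- Option B: vehicles keeps only the required columns (id, type,
-- status, location), and the optional fields move to a second table
-- keyed back to the specific vehicle.
CREATE TABLE vehicle_metadata (
    vehicle_id integer NOT NULL REFERENCES vehicles (id),
    make       text,
    model      text,
    year       integer,
    mileage    integer,
    color      text
);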


As a SQL layman, I don't really know what difference using one table versus two will make. I'd be interested to understand:

  1. As a db guru, how would you compare and test these schema options to determine which performs best? (The best I've been able to come up with on my own is sketched after this list.)

  2. Is one approach or the other generally considered more conventional or standard practice?
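For what it's worth, the only testing approach I've pieced together myself is to fake up a big data set with Postgres's generate_series and compare what EXPLAIN ANALYZE reports for a typical query against each layout. I have no idea whether this is how a pro would do it, and the row counts and values below are just placeholders:

-- Load the one-table layout (Option A) with a million fake rows.
INSERT INTO vehicles (type, status, location, make, model, year, mileage, color)
SELECT 'car',
       'available',
       'lot-'   || (g % 50),
       'make-'  || (g % 20),
       'model-' || (g % 100),
       1990 + (g % 25),
       g % 200000,
       'color-' || (g % 10)
FROM generate_series(1, 1000000) AS g;

-- Run a typical "hot" query and read off the actual time and buffer
-- counts, then repeat the equivalent query (with a JOIN against
-- vehicle_metadata) on the two-table layout and compare.
EXPLAIN (ANALYZE, BUFFERS)
SELECT id, type, status, location
FROM vehicles
WHERE status = 'available' AND location = 'lot-7';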

Thanks!

