For day 21 of the Blogging A-to-Z challenge I'm going to wade into a religious debate: UUIDs vs. integers for database primary keys.
First, let's define UUID, which stands for Universally Unique Identifier. A UUID comprises 32 hexadecimal digits, typically displayed in five groups separated by hyphens (8-4-4-4-12). The actual identifier is 128 bits long, meaning the chance of a collision between any two of them is slightly lower than the chance of finding a specific grain of dust somewhere in the solar system.
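If you want to see one for yourself, here's a minimal sketch using Python's standard uuid module; the printed value below is just an example, and yours will differ:

```python
# Generate a random (version 4) UUID and inspect its structure.
import uuid

new_id = uuid.uuid4()
print(new_id)                          # e.g. 3f2b8c9e-1d4a-4f6b-9c2d-7e8a1b0c5d4e
print(len(new_id.hex))                 # 32 hexadecimal digits
print(new_id.int.bit_length() <= 128)  # True: the value fits in 128 bits
```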
An integer, on the other hand, has just 32 or 64 bits, depending on the system you're using. And integer keys collide constantly: because incrementing keys typically start at 1, the first row of every table shares its ID with the first row of every other table. Also, with an incrementing integer, you don't know what ID your database will assign a row until after you've inserted it, unless you write some gnarly SQL that hits the database at least twice.
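To make that last point concrete, here's a sketch using Python's built-in sqlite3 module; the table and column names are invented for illustration:

```python
# Contrast a database-assigned serial key with a client-generated UUID key.
import sqlite3
import uuid

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users_serial (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE users_uuid (id TEXT PRIMARY KEY, name TEXT)")

# Serial key: the database assigns the ID, so you learn it only
# after the insert has happened.
cur = conn.execute("INSERT INTO users_serial (name) VALUES (?)", ("Alice",))
print(cur.lastrowid)  # known only now, after the round trip

# UUID key: you can generate the ID yourself before touching the database.
new_id = str(uuid.uuid4())
conn.execute("INSERT INTO users_uuid (id, name) VALUES (?, ?)", (new_id, "Alice"))
print(new_id)  # known before the INSERT ever ran
```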
Many people have weighed in on whether to use UUIDs or auto-incrementing integers for database keys. They argue about the physical alignment of rows, debugging and friendly URLs, stable IDs vs. deterministic IDs, non-uniqueness across tables, inadvertent data disclosure... lots of reasons to use one or the other.
The bottom line? It doesn't really matter. What matters is that you have sensible, non-religious reasons for your choice.
Both UUIDs and serial integers have their place, depending on the context. If you have a lookup table that users will never see, use serial IDs; who cares? If you use an ORM extensively, you might prefer UUIDs, because your objects can get their IDs the moment they're created, before anything touches the database.
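Here's a hedged sketch of that pattern, using a plain Python dataclass to stand in for an ORM model (the User class is hypothetical):

```python
# The model assigns its own UUID at construction time, so related
# objects can reference each other before anything is saved.
import uuid
from dataclasses import dataclass, field

@dataclass
class User:
    name: str
    id: uuid.UUID = field(default_factory=uuid.uuid4)

user = User(name="Alice")
print(user.id)  # the key exists before any INSERT runs
```

Real ORMs like Django and SQLAlchemy support the same idea through column defaults on the primary key.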
If you're new to programming, all of this probably seems like arguing over angels on the head of a pin. So read up on it, listen to the arguments on both sides, and then decide what works for your problem. Which is basically what you should do all the time as a professional programmer.