UNIX timestamps are normally stored as 32-bit or 64-bit signed integers, not as floating-point. If you want better than one-second resolution, the type "struct timespec" (specified by POSIX) gives you nanosecond precision. Fixed-point types can also be used in languages that support them.
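For illustration, a minimal C sketch that fills a struct timespec via clock_gettime() (CLOCK_REALTIME is the wall-clock UNIX time):

    #include <stdio.h>
    #include <time.h>

    int main(void) {
        struct timespec ts;
        /* CLOCK_REALTIME = wall-clock time: seconds + nanoseconds since the epoch */
        if (clock_gettime(CLOCK_REALTIME, &ts) != 0) {
            perror("clock_gettime");
            return 1;
        }
        printf("%lld.%09ld\n", (long long)ts.tv_sec, ts.tv_nsec);
        return 0;
    }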
Yeah, timespec (or time_interval) is a proper way to go, but it is quite a pain to work with -- you need a library even if you just want to subtract the numbers.
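As a sketch of that pain point: there is no standard subtraction helper, so you end up hand-rolling something like this (timespec_sub is just a made-up name):

    #include <time.h>

    /* a - b, normalized so tv_nsec stays in [0, 1e9) */
    struct timespec timespec_sub(struct timespec a, struct timespec b) {
        struct timespec d;
        d.tv_sec  = a.tv_sec  - b.tv_sec;
        d.tv_nsec = a.tv_nsec - b.tv_nsec;
        if (d.tv_nsec < 0) {        /* borrow a second if the nanoseconds went negative */
            d.tv_sec  -= 1;
            d.tv_nsec += 1000000000L;
        }
        return d;
    }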
On the other hand, floating-point time is pretty common in scripting languages -- for example Python has time.time(); Ruby has Time.now.to_f. It is not perfect, but it is great for smaller scripts: foolproof (apart from the precision loss), it round-trips through any serialization format, and it is easy to understand. And no timezone problems at all!
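To put a rough number on the precision loss: a double has a 53-bit significand, so around a present-day timestamp (~1.7e9 seconds) adjacent representable values are about 240 ns apart. A quick check in C:

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        double now = 1.7e9;  /* a 2023-ish UNIX timestamp, just for scale */
        /* gap to the next representable double: the best resolution a float
           timestamp can have at this magnitude (~2.4e-7 s, i.e. ~240 ns) */
        printf("%g\n", nextafter(now, INFINITY) - now);
        return 0;
    }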
UNIX time is seconds since the epoch (hence the year-2038 problem: that's the limit of a signed 32-bit time_t).
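For the arithmetic: a signed 32-bit time_t tops out at 2^31 - 1 = 2147483647 seconds after the epoch, which falls on 19 January 2038. A tiny C demonstration:

    #include <stdio.h>
    #include <stdint.h>
    #include <time.h>

    int main(void) {
        time_t limit = INT32_MAX;  /* 2^31 - 1 seconds after 1970-01-01T00:00:00Z */
        /* prints "Tue Jan 19 03:14:07 2038" (UTC) */
        printf("%s", asctime(gmtime(&limit)));
        return 0;
    }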
gettimeofday() and clock_gettime() provide higher-resolution timestamps (µs and ns respectively), returning structs (struct timeval / struct timespec) rather than a single number.
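For example, a minimal gettimeofday() sketch (struct timeval carries the microsecond field); clock_gettime() works the same way but with struct timespec and nanoseconds:

    #include <stdio.h>
    #include <sys/time.h>

    int main(void) {
        struct timeval tv;
        /* seconds + microseconds since the epoch */
        gettimeofday(&tv, NULL);
        printf("%lld.%06ld\n", (long long)tv.tv_sec, (long)tv.tv_usec);
        return 0;
    }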
Some APIs return floating-point UNIX time in order to provide sub-second accuracy (the decimal part is the fractional second). Python's time.time() does that for instance.
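Under the hood it's just the integer seconds plus the sub-second remainder folded into one double; a sketch of the same idea in C, assuming a POSIX clock as the source (unix_time_as_double is a made-up name):

    #include <stdio.h>
    #include <time.h>

    /* roughly what a floating-point "seconds since the epoch" API returns */
    double unix_time_as_double(void) {
        struct timespec ts;
        clock_gettime(CLOCK_REALTIME, &ts);
        return (double)ts.tv_sec + (double)ts.tv_nsec / 1e9;
    }

    int main(void) {
        printf("%.6f\n", unix_time_as_double());  /* e.g. 1700000000.123456 */
        return 0;
    }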