UBC caches everything it can, and will almost certainly cache more in the absence of a zfs subsystem, since the latter robs memory from it.
Because the UBC is *unified*, any clean pages not locked into physical memory will be cached. That includes demand-paged content (text pages and the like), mmap(2)ed and read(2) data, and formerly dirty pages that have since been committed to secondary storage (i.e., old write(2) data, metadata updates (atime updates notably), and so forth). Active pages are those that have been recently accessed, as well as those brought in by warmd, kern.preheat, and so forth. ARC memory is most likely to be accounted for as active, although some may be treated as inactive if all your pools go idle for a substantial period of time.
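If you want to watch this yourself, here's a quick sketch (mine, not anything shipped with zevo) that mmap(2)s a file read-only and asks mincore(2) which of its pages are currently resident, i.e., sitting in the UBC; the dictionary path is just an arbitrary example and the error handling is minimal:

/* Sketch: count how many pages of a file are resident in the UBC. */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/stat.h>

int main(int argc, char **argv)
{
    const char *path = (argc > 1) ? argv[1] : "/usr/share/dict/words";
    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    size_t pagesz = (size_t)getpagesize();
    size_t npages = ((size_t)st.st_size + pagesz - 1) / pagesz;

    void *p = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    char *vec = malloc(npages);
    if (vec == NULL) { perror("malloc"); return 1; }
    if (mincore(p, (size_t)st.st_size, vec) < 0) { perror("mincore"); return 1; }

    size_t resident = 0;
    for (size_t i = 0; i < npages; i++)
        if (vec[i] & MINCORE_INCORE)   /* page is in core, i.e., cached */
            resident++;

    printf("%zu of %zu pages of %s are resident (cached)\n",
           resident, npages, path);
    return 0;
}

Run it once, then read(2) or cat the file and run it again, and you can see the UBC filling in behind you.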
The count of inactive pages in top(1) and in the Activity Monitor application is *roughly* the count of UBC pages that have not been recently accessed. "Free" will vanish to kern.vm_page_free_min over time (i.e., to nearly zero), given sufficient I/O creating new dirty pages or accessing previously uncached pages.
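For a programmatic version of what top(1) and Activity Monitor display, something like this rough sketch (the MiB formatting is just for illustration) pulls the same free/active/inactive/wired page counts out of Mach:

/* Sketch: print the free/active/inactive/wired page counts, as top(1) does. */
#include <stdio.h>
#include <mach/mach.h>
#include <mach/mach_host.h>

int main(void)
{
    vm_statistics64_data_t vmstat;
    mach_msg_type_number_t count = HOST_VM_INFO64_COUNT;
    vm_size_t pagesize = 0;

    host_page_size(mach_host_self(), &pagesize);
    if (host_statistics64(mach_host_self(), HOST_VM_INFO64,
                          (host_info64_t)&vmstat, &count) != KERN_SUCCESS) {
        fprintf(stderr, "host_statistics64 failed\n");
        return 1;
    }

    printf("free:     %llu MiB\n", (unsigned long long)vmstat.free_count     * pagesize >> 20);
    printf("active:   %llu MiB\n", (unsigned long long)vmstat.active_count   * pagesize >> 20);
    printf("inactive: %llu MiB\n", (unsigned long long)vmstat.inactive_count * pagesize >> 20);
    printf("wired:    %llu MiB\n", (unsigned long long)vmstat.wire_count     * pagesize >> 20);
    return 0;
}

Watch "free" in that output while generating I/O and you'll see it drift down toward the floor described above.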
zstat gives more precision about the ARC than comparable tools do about the UBC. Usually the most dynamic and interesting line items in it will be:
WIRED 88 MiB 1848 MiB/1901 1937 MiB 11.82%
and
200 4096 92061 110519 202580 786920 arc_buf_hdr_t
104 4096 57569 8779 66348 315096 arc_buf_t
The former shows how much physical memory is being used by the zfs subsystem; arc_buf_t entries are references to ARC records (which will typically be about 128 KiB each), and arc_buf_hdr_t entries are references to L2ARC records; each of these eats about 256 bytes of physical memory.
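To put rough numbers on those sizes (my arithmetic, not figures taken from the zstat output above): 1 GiB of cached data held as 128 KiB ARC records is about 8192 records, and at roughly 256 bytes apiece that is only about 2 MiB of header memory; the same sum for arc_buf_hdr_t means even a large L2ARC device costs only on the order of 2 MiB of RAM per GiB of L2ARC to track.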
You will note that zfs memory will typically be small compared to the total Active+Inactive+Wired memory, especially on a large-physical-memory system.
Bit errors in *any* of that memory may result in reading bad information from the cache, which includes anonymous pages (used for app data structures, mainly), pages of machine code, backing store for the display system, kernel data structures and code, and so forth.
If RAM errors are random, on a system with lots of memory and doing lots of I/O, they will likely hit clean UBC pages, since those typically occupy the largest fraction of physical memory. Those pages may be read again at some point in the future, or may be discarded, depending on system activity.
On a system with so little memory that caches are squeezed, bit errors are more likely to hit in-use pages: application code (including library code shared among many applications), data structures, or (if there is lots of writing activity, including atime updates) dirty pages waiting to be committed to storage.
ARC/UBC interaction is tricky to gauge with the available instrumentation (although it is amenable to dtrace-ing). Applications will generally use whatever is in the UBC first, and will do so without notifying the ARC, so the very hottest read-only pages will sit in the UBC and may well be evicted from the ARC. On the other hand, pages that are frequently dirtied will be kept alive in the ARC, since writes are pushed from the UBC into the ARC. The busiest blocks in the ARC on the zevo port are likely to be metadata from things like atime updates or directory activity, rather than truly hot file-backed mmap pages. That's OK; analogous structures are hot in kernel caches for HFS+ and other virtual filesystems, so ZFS is no more exposed to RAM errors than they are.
The worst thing one could say about ZFS on a non-ECC-RAM system is that there is an expectation that data on ZFS is "safe" from most errors, so uncaught data integrity problems are more surprising. Unfortunately nothing is "safe" in the presence of actual RAM errors. There is pretty much nothing one can do to reliably avoid being the victim of a random RAM error. Mitigations exist, but a general policy of minimizing RAM occupancy and avoiding caching, or alternatively of doing computationally and/or memory-access-intensive on-line integrity checking, is, imho, a poor use of system resources (and on a system with actual RAM errors, each of these tactics may *worsen* data corruption). The only real solution is strongly error-checked and -corrected RAM and an operating system built to report and recover from detected RAM errors.
(Of course even then you are subject to errors in the processors and their caches, software and firmware bugs, ...)