zfs gives you lots of rope with which to make yourself very uncomfortable later.
in the first scenario you're best off creating two mirror vdevs (zpool create -n foo mirror 2tb1 2tb2 mirror 1tb1 1tb2); this will give you ~3TB with reasonable replication, and will perform fine for most workloads. zpool version 28 is pretty good at dealing with differently-sized and even differently-performing top-level vdevs, but it is still a good idea to build each top-level vdev out of effectively-identical physical devices.
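back-of-envelope arithmetic for that layout (a sketch with the nominal drive sizes from above; real usable space is a bit lower after metadata): a pool of mirror vdevs stripes across them, so usable capacity is the sum of each mirror's smallest member.

```shell
# usable capacity of a pool of mirror vdevs = sum over vdevs of
# min(member size); sizes in TB, nominal
mirror_a=2   # min(2tb1, 2tb2)
mirror_b=1   # min(1tb1, 1tb2)
echo "usable: $((mirror_a + mirror_b)) TB"   # usable: 3 TB
```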
you can, however, arrange practically any sort of pool you want, although you are likely to lose out on replication (a series of i/o errors on a single disk could take your pool offline, and a single failed disk may make your pool effectively unrecoverable).
that said, in order to treat the two 1tb physical drives as a single 2tb virtual drive for zfs purposes, you would have to use an unrelated disk management tool, juggle GPT slices yourself, and hand those slices to an appropriate zpool create command line. *nobody sensible* would ever offer to support such a setup, as the software interactions will be very complicated, especially in the event of an underlying hardware fault.
you can certainly use a non-host-based hardware raid system to concatenate the smaller drives; do you have one already sitting idle? if not, don't spend money on one: 2tb disks are cheap and straightforward, so just get another one of those if you want to go the "zpool create -n foo raidz1 2tb1 2tb2 2tb3" route.
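for what it's worth, a sketch of what that three-disk raidz1 buys you (nominal sizes; raidz1 usable capacity is roughly (n - 1) × smallest member, before overhead):

```shell
# raidz1 usable capacity ~ (disks - 1) * smallest member; TB, nominal
disks=3
smallest=2
echo "usable: $(( (disks - 1) * smallest )) TB"   # usable: 4 TB
```

so one extra 2tb disk gets you more usable space than the two-mirror layout, at the cost of slower resilvers and only single-disk redundancy across the whole pool.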
in the second scenario, again, you are adding a lot of complexity that will risk your data being unavailable (perhaps permanently). if you are certain you understand the risks and tradeoffs, your better bet is to hardware-raid the two 2tb drives together and "zpool create -n foo mirror real4 fake4". that is better than e.g. "zpool create -n foo mirror real2a real2b; zpool add -n foo real4" -- if you strip out the "-n"s, you will get a big warning about a replication level mismatch from the zpool add. it is also better than attempting to stack different software raid/volume management systems in front of zfs.
the only stable approach i've found (and used) is to create a composite-disk LVG with corestorage, then create an LV inside that, and give the LV to a zpool create / zpool add command line. the composite disk is just a concatenation, though; except in one special case, writes go to the first physical volume until it fills, then to the second. (the special case is when the first PV is a solid state disk and your particular kernel supports fusion drives; not all 10.8.2 or 10.8.3 kernels do.)
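roughly, the sequence looks like this. this is a sketch only: the disk identifiers, LVG/LV names, and pool name are all placeholders (check yours with `diskutil list` first), and the block just prints the plan rather than running anything.

```shell
# print the corestorage-then-zpool plan; every name and device below
# is a placeholder, not a real identifier on your machine
plan='diskutil coreStorage create ConcatLVG disk2 disk3
diskutil coreStorage createVolume <lvgUUID> jhfs+ ConcatLV 100%
zpool create foo mirror disk4 <LV-device>'
printf '%s\n' "$plan"
```

the jhfs+ filesystem created on the LV is throwaway; zpool create overwrites it, you only need the LV to exist so it shows up as a disk device you can hand to zfs.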
if you think doing it with corestorage is complicated, well, i personally think that going down that route with appleraid or softraid is even worse. "good luck", and make sure you have lots of backups, and don't be surprised when things break at the worst possible time.
better approaches (in my mind) would be:
mirror 2tba 2tbb
raidz 2tba 2tbb 4tb [wasting half the 4tb; you can zpool replace it with a 2tbc at some point, or replace the 2tb drives with larger ones and grow the pool (zpool online -e, or set the autoexpand property)]
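the capacity tradeoff between those two layouts, sketched with the same nominal sizes (mirror usable = smallest member; raidz1 usable ~ (n - 1) × smallest member, since every member contributes only as much as the smallest):

```shell
# mirror 2tba 2tbb: usable = min member size, in TB
mirror_usable=2
# raidz 2tba 2tbb 4tb: each member contributes min(member) = 2 TB,
# so the 4tb drive donates only half of itself
raidz_usable=$(( (3 - 1) * 2 ))
wasted_4tb=$(( 4 - 2 ))
echo "mirror: ${mirror_usable} TB  raidz1: ${raidz_usable} TB  (4tb waste: ${wasted_4tb} TB)"
```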
You might also want to read:
http://constantin.glez.de/blog/2010/01/ ... still-best