> Hi all,
> I'm going to be hacking carrier sense into the FPGA on the USRP2 very
> soon. Basically, taking what I did with the "in-band" project from the
> USRP1 with carrier sense, and moving it forward to the USRP2.
> The idea is that, just as you can set a timestamp to "gate" a packet on
> its way out (only transmit it at time X), you can do something similar
> with carrier sense. If the burst has the carrier sense flag set, then
> you wait for the carrier to become idle, then transmit the packet.
> For the in-band implementation, I had a command to set the value at
> which the carrier is determined to be busy/idle. This was stored in
> memory in the FPGA. Then, when a burst came in with its carrier sense
> bit set, that value was used.
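The busy/idle rule described above amounts to a per-reading threshold comparison. A minimal sketch in Python, where the `carrier_sense` flag name, the dBm-style power readings, and the `gate_burst` helper are all illustrative rather than actual USRP2/UHD identifiers:

```python
def channel_busy(power_dbm, threshold_dbm):
    """FPGA-side decision: the carrier is busy when the measured power
    exceeds the configured threshold (both in dBm here)."""
    return power_dbm > threshold_dbm

def gate_burst(burst, power_readings, threshold_dbm):
    """Hold a carrier-sense-flagged burst until the first reading at
    which the channel is idle. Returns that reading's index, 0 if the
    burst is not gated, or None if the channel never went idle."""
    if not burst.get("carrier_sense", False):
        return 0  # no carrier-sense flag: transmit immediately
    for i, power in enumerate(power_readings):
        if not channel_busy(power, threshold_dbm):
            return i
    return None
```

For example, with a threshold of -60 dBm, a flagged burst seeing readings of -40, -42, -75 dBm would be held through the first two readings and released at the third.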
> OK: so I'd like to redo this implementation to keep this kind of
> functionality alive, and I will use it myself. But in doing so, I'd
> like to implement it in a way that jibes well with the higher-ups. If
> I'm going to do it, I'd like to do it right so that it lives on through
> future versions of GNU Radio. That means doing it using UHD.
> For the life of me, I can't find the UHD header spec. But I imagine
> somewhere in there we can fit a bit to gate based on carrier sense, and
> a new command to set the carrier sense threshold.
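Without claiming anything about the actual UHD header layout (the spec isn't located in this thread), the shape of the proposal is one new flag bit alongside the existing per-burst metadata, plus a threshold setting. A sketch in which every name and bit position is invented for illustration:

```python
# Illustrative flag bits for a per-burst header. The first three mirror
# the kind of metadata UHD already carries per burst (start/end of
# burst, timed transmit); the fourth is the proposed new bit. None of
# these positions come from the real UHD header spec.
FLAG_START_OF_BURST = 1 << 0
FLAG_END_OF_BURST   = 1 << 1
FLAG_HAS_TIME_SPEC  = 1 << 2
FLAG_CARRIER_SENSE  = 1 << 3  # proposed: gate this burst on carrier sense

def make_flags(sob=False, eob=False, timed=False, cs=False):
    """Pack the per-burst gating options into a single flags word."""
    flags = 0
    if sob:
        flags |= FLAG_START_OF_BURST
    if eob:
        flags |= FLAG_END_OF_BURST
    if timed:
        flags |= FLAG_HAS_TIME_SPEC
    if cs:
        flags |= FLAG_CARRIER_SENSE
    return flags
```

The carrier-sense threshold itself would then be a separate, rarely-changing register write rather than per-burst metadata, just as in the in-band implementation.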
> If you have any advice/guidance on how you'd like to see this
> implemented, let me know. I sincerely would like this to live long and
> prosper in GNU Radio and the USRPs.
I totally like and support your idea and would love to help realize it.
Using the timestamp logic inside UHD as a reference is a great idea that
also crossed my mind a while ago.
There are a few things from an architectural point of view, though, that
need to be discussed. Let's take a CSMA MAC as an example; I guess that
goes in the right direction :-) Just having the "if channel free,
transmit packet" logic inside the FPGA wouldn't make much sense in a
multi-user environment. If the channel is busy and multiple nodes have
packets in their tx queues, they would all end up sending their packets
at more or less the same time once the channel becomes idle again
(assuming all nodes are in sensing range). In a practical system, one
would therefore start to move parts of the CSMA state machine, i.e. the
random backoff, into the FPGA. Trying to control this via UHD is
probably a bad idea, as UHD's main business is transport.
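The synchronized-retransmission problem is easy to see in a toy model. This sketch (plain Python, nothing USRP-specific; the 10 µs slot time and 16-slot contention window are assumed values, not measured ones) compares release times with and without a random backoff:

```python
import random

def release_times(idle_at, slot, backoff_slots):
    """Each waiting node transmits after its chosen number of backoff
    slots once the channel goes idle at time `idle_at` (seconds)."""
    return [idle_at + k * slot for k in backoff_slots]

SLOT = 10e-6  # assumed 10 us slot time

# Without backoff, every waiting node fires the instant the channel
# frees up -- a guaranteed collision among in-range nodes:
collide = release_times(100.0, SLOT, [0, 0, 0, 0])

# With backoffs drawn from a 16-slot contention window, the attempts
# spread out and usually miss each other:
rng = random.Random(42)
spread = release_times(100.0, SLOT, [rng.randrange(16) for _ in range(4)])
```

At these timescales the backoff decision cannot take a round trip to the host, which is exactly why this piece of the state machine would have to sit in the FPGA next to the gate itself.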
I do think we need something like what you have suggested but I am still
a bit puzzled about the right way of implementing it.