uFuture
A future is a placeholder returned immediately from an asynchronous call; the result is filled in later, and the caller blocks only if it tries to read before the value arrives. uC++ provides Future_ISM<T> (implicit storage management — GC’d) and Future_ESM<T> (explicit — client allocates/deallocates).
Why futures instead of start/wait or callbacks?
Because the protocol is implicit. Start/wait needs two calls with a matching ticket (forgeable, easy to misuse). Callbacks invert control flow (the server calls the client) and the callback routine can't block. A future is just a value — the caller writes `i = f + 3;` and blocks transparently if `f` isn't ready yet. No protocol to get wrong.
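The same "future as a plain value" idea exists in standard C++. A minimal sketch using `std::future`/`std::async` as a stand-in for uC++'s `Future_ISM` (the names `compute` and `caller` are illustrative, not from uC++):

```cpp
#include <future>

// Standard-C++ analog of the uC++ idea: the caller reads the future like a
// value and blocks transparently only if the result has not arrived yet.
int compute() { return 39; }                 // hypothetical async work

int caller() {
    std::future<int> f = std::async( std::launch::async, compute );
    int i = f.get() + 3;                     // "i = f + 3" — blocks iff not ready
    return i;
}
```

The caller never names a ticket or registers a callback; reading the value is the whole synchronization protocol.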
Client-side usage
```cpp
#include <uFuture.h>

Server server;
Future_ISM<int> f[10];
for ( int i = 0; i < 10; i += 1 ) {
    f[i] = server.perform( i );              // async; returns immediately
}
// do other work while server processes
for ( int i = 0; i < 10; i += 1 ) {
    osacquire( cout ) << f[i]() << endl;     // block if not ready; () = read-only copy
}
```

API
- `future()` — read-only copy of the value; blocks if empty, raises an exception if the server delivered one.
- `available()` — true if the async call completed (result, exception, or cancelled).
- `reset()` — mark the future empty so it can be reused.
- `cancel()` — cancel the pending async call; waiting clients get `uCancelled`.
- `cancelled()` — true if the cancel succeeded.
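`available()` can be approximated in standard C++ with a zero-timeout `wait_for` — a rough emulation only (uC++'s real `available()` also reports delivered exceptions and cancellation, which this sketch ignores):

```cpp
#include <chrono>
#include <future>

// Rough std::future stand-in for uC++'s available(): a zero-length wait
// tells us whether the result has been deposited, without blocking.
bool available( std::future<int> & f ) {
    return f.wait_for( std::chrono::seconds( 0 ) ) == std::future_status::ready;
}

bool demo_available() {
    std::promise<int> p;
    std::future<int> f = p.get_future();
    bool before = available( f );            // false: nothing delivered yet
    p.set_value( 1 );
    bool after = available( f );             // true: result deposited
    return !before && after;
}
```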
Server-side delivery
```cpp
_Task Server {
    struct Work {
        int i;
        Future_ISM<int> result;
        Work( int i ) : i( i ) {}
    };
    std::list<Work *> requests;              // pending work
  public:
    Future_ISM<int> perform( int i ) {       // called by clients
        Work *w = new Work( i );
        requests.push_back( w );
        return w->result;                    // return future immediately
    }
  private:
    void main() {                            // worker loop does:
        Work *w = requests.front(); requests.pop_front();
        int r = /* compute */;
        w->result.delivery( r );             // unblocks waiting clients
        delete w;                            // client futures NOT deleted (ref counting)
    }
};
```

- `delivery( T )` — deposit the value, unblock waiters.
- `delivery( uBaseEvent * )` — deposit an exception, rethrown at each waiter; the exception must be dynamically allocated.
ISM vs ESM
| | Future_ISM | Future_ESM |
|---|---|---|
| Storage | ref-counted, auto-freed | client `new`/`delete` |
| Simpler | ✔ | |
| More efficient | | ✔ |
How Future_ISM is implemented (Buhr §9.11.2)
Future_ISM<T> is a thin handle wrapping a pointer to a heap-allocated future state:
```cpp
template<typename T>
class Future_ISM {
    struct State {
        T value;
        bool available;
        uBaseEvent * exn;                    // pending exception, if any
        uSemaphore ready;                    // waiters block here
        std::atomic<int> refs;               // reference count
    };
    State * s;                               // handle points at shared state
  public:
    Future_ISM() { s = new State{}; s->refs = 1; }
    Future_ISM( const Future_ISM & o ) { s = o.s; s->refs += 1; }
    Future_ISM & operator=( const Future_ISM & o ) {  // used by f[i] = server.perform(i)
        o.s->refs += 1;
        if ( --s->refs == 0 ) delete s;
        s = o.s;
        return *this;
    }
    ~Future_ISM() { if ( --s->refs == 0 ) delete s; }
    T operator()();                          // block on s->ready, then return s->value
    void delivery( T v );                    // set s->value; s->ready.V() all waiters
};
```

- the server's `Work *w` holds one reference (the one returned by `perform`)
- each client copy (`f[i] = server.perform(i)`) bumps the count
- when the server `delete`s its `Work` struct, that reference drops; when the last client drops its handle, `refs` hits zero and the `State` is freed
- this is why `delete w; // client futures NOT deleted` above works: deleting `Work` only drops the server's single reference; client handles still point at the live `State`
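The refcounting mechanics can be demonstrated in isolation with plain standard C++ (a minimal, non-blocking sketch — `State` and `Handle` here are hypothetical stand-ins for the shared state and `Future_ISM` handle):

```cpp
#include <atomic>

// Minimal refcount sketch: deleting the server's handle (its Work) drops
// one reference; the State stays alive while any client handle remains.
struct State {
    int value = 0;
    std::atomic<int> refs{ 1 };
};

struct Handle {                              // stand-in for Future_ISM
    State * s;
    Handle() : s( new State ) {}
    Handle( const Handle & o ) : s( o.s ) { s->refs += 1; }
    Handle & operator=( const Handle & o ) {
        o.s->refs += 1;
        if ( --s->refs == 0 ) delete s;
        s = o.s;
        return *this;
    }
    ~Handle() { if ( --s->refs == 0 ) delete s; }
};

int demo() {
    Handle *server = new Handle;             // server's reference (inside Work)
    Handle client = *server;                 // client copy: refs == 2
    server->s->value = 42;                   // "delivery"
    delete server;                           // server drops its reference only
    return client.s->value;                  // client still reads the live State
}
```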
Future_ESM<T> skips all of this: the future object is the state, and lifetimes are the client’s problem. Cheaper (no heap allocation beyond the client’s own, no atomic refcount) but one use-after-free bug and the server writes into freed memory.
Select statement
`_Select( selector-expression );` waits on a heterogeneous set of futures according to a logical selection criterion — `_Select( f1 && f2 )` blocks until both are available, `_Select( f1 || f2 )` until either is.
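The two selector forms can be roughly emulated with `std::future` (a sketch only: `std::future` has no native "wait for either", so the `||` case polls; uC++'s `_Select` blocks properly without polling):

```cpp
#include <chrono>
#include <future>
#include <thread>

bool ready( std::future<int> & f ) {
    return f.wait_for( std::chrono::seconds( 0 ) ) == std::future_status::ready;
}

void select_and( std::future<int> & f1, std::future<int> & f2 ) {  // f1 && f2
    f1.wait(); f2.wait();                    // satisfied once both are available
}

void select_or( std::future<int> & f1, std::future<int> & f2 ) {   // f1 || f2
    while ( !ready( f1 ) && !ready( f2 ) )   // satisfied once either is available
        std::this_thread::sleep_for( std::chrono::milliseconds( 1 ) );
}

int demo_select() {
    std::future<int> f1 = std::async( std::launch::async, []{ return 1; } );
    std::future<int> f2 = std::async( std::launch::async, []{
        std::this_thread::sleep_for( std::chrono::milliseconds( 50 ) );
        return 2;
    } );
    select_or( f1, f2 );                     // returns once either is ready
    select_and( f1, f2 );                    // returns once both are ready
    return f1.get() + f2.get();
}
```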