About a year and a half ago, Bob Wyman was instrumental in defining an approach to greatly reduce the load and bandwidth used by applications that polled for changes to RSS/Atom feeds.
The other week, he noted that Microsoft will support the RFC3229+feed approach as well - which is good.
The only problem I have with this approach is that I think it is simpler to use hyperlinks, and I haven't seen a real comparison between the two. Both approaches require the client application to keep track of what data it last retrieved, but using hyperlinks gives pre-existing caching servers a better chance of working without modification. I think the Atom protocol has defined something like this, but I couldn't follow the email threads.
To use hyperlinks, the data returned in a feed would include a link to the 'next' (more recently changed) posts. The client follows that link, which either yields an empty result (optionally with a Cache-Control header indicating how long to wait before checking again) or more data, along with another 'next' link. The client just keeps following the links. Each client tracks two URIs: the original, well-known location that new readers start from, and a changing one pointing at the set of data most recently retrieved by that particular client. The server decides what each 'next' link is and what it contains, so the data would be highly cacheable across all clients: everyone asking for the same range of entries requests the same URI.
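To make the loop concrete, here's a minimal sketch of the client side. Feed documents are modeled as plain dicts and `fetch` as a lookup function; a real client would fetch each URI over HTTP and parse the feed body, and the field names here are hypothetical, not part of any defined protocol.

```python
def poll(fetch, start_uri):
    """Follow 'next' links from start_uri, collecting new entries.

    `fetch` maps a URI to a feed document: a dict with an 'entries'
    list and, on pages that have data, a 'next' URI. Returns the new
    entries plus the URI this client should resume from next time.
    """
    entries = []
    uri = start_uri
    while True:
        feed = fetch(uri)
        if not feed.get("entries"):
            # Empty page: we've reached the frontier. Wait (honoring
            # any Cache-Control header) and re-poll this same URI.
            return entries, uri
        entries.extend(feed["entries"])
        uri = feed["next"]  # every data page carries a 'next' link

# A new reader starts from the well-known URI; on later polls it
# passes in the frontier URI returned by the previous call.
pages = {
    "/feed":    {"entries": ["a", "b"], "next": "/after-b"},
    "/after-b": {"entries": ["c"],      "next": "/after-c"},
    "/after-c": {"entries": []},
}
entries, resume = poll(pages.__getitem__, "/feed")
```

Because every client walking the same range of entries requests the same URIs, intermediary caches can serve those pages without knowing anything about feeds.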
The downside of this approach is the need to put the link within the content of the response - or, if the content isn't easily extended, to add a response header carrying that location.
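Either placement might look something like the following - the URI and exact syntax are illustrative, not a defined convention:

```
<!-- inside the feed document itself, e.g. an Atom link element -->
<link rel="next" href="http://example.com/feed/after-entry-123"/>

# or, when the body can't easily be extended, as an HTTP response header:
Link: <http://example.com/feed/after-entry-123>; rel="next"
```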