1. Topologies

- 2x2
- 4x4
- 2x2 clusters
- with hardnodes
- larger topology
- arbitrary

2. Variations in the graph specification

- 'in' edges only
- 'out' edges only
- both 'in' and 'out' edges
- incomplete graph (not all edges)
- inconsistent graph specification, e.g.
   (r1:swp1 -> r2:swp1;
    r3:swp2 -> r1:swp1;)
- node_name missing or mismatched
- changes to the .dot file (we need to subscribe... inotify?)
- IP-address-based graph
- MAC-address-based graph
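
The inconsistent-spec case above (r1:swp1 claimed by two different remote ends) can be caught before the file ever reaches PTMD. A minimal sketch of such a pre-check; `parse_edges` and `find_conflicts` are hypothetical test-harness helpers, not part of PTM:

```python
# Pre-flight consistency check on a DOT-style edge list: flag any port
# that the spec wires to more than one remote end.
import re
from collections import defaultdict

EDGE_RE = re.compile(r'"?([\w.-]+)"?:"?([\w.-]+)"?\s*->\s*"?([\w.-]+)"?:"?([\w.-]+)"?')

def parse_edges(text):
    """Extract ((node, port), (node, port)) pairs from DOT-style edge lines."""
    return [((a, b), (c, d)) for a, b, c, d in EDGE_RE.findall(text)]

def find_conflicts(edges):
    """Report ports that appear with more than one remote end."""
    seen = defaultdict(set)
    for src, dst in edges:
        seen[src].add(dst)
        seen[dst].add(src)   # treat each edge as a bidirectional cable
    return {port: ends for port, ends in seen.items() if len(ends) > 1}

spec = """
r1:swp1 -> r2:swp1;
r3:swp2 -> r1:swp1;
"""
print(find_conflicts(parse_edges(spec)))   # r1:swp1 has two claimed peers
```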

3. Dynamic events

- Link down
- Link up
- 'match' followed by 'no match'
- 'no match' followed by 'match'
- non-existent link followed by link addition
- hostname change
- link IP change
- link MAC change
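
To enumerate the event sequences above, a toy model of the per-port cable check helps; `expected` comes from the graph file, `observed` from LLDP. The status names here are illustrative, not PTMD's actual output:

```python
# Toy cable-check model for driving the dynamic-event sequences:
# 'match' / 'no match' transitions and link down (neighbor lost).
def cable_check(expected, observed):
    if observed is None:
        return "no-info"          # link down / LLDP neighbor lost
    return "pass" if observed == expected else "fail"

expected = ("r2", "swp1")
events = [
    ("link up, correct peer", ("r2", "swp1")),   # 'match'
    ("recabled to wrong peer", ("r3", "swp2")),  # 'match' followed by 'no match'
    ("link down", None),
    ("link up, correct peer", ("r2", "swp1")),   # 'no match' followed by 'match'
]
for name, observed in events:
    print(name, "->", cable_check(expected, observed))
```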

4. Restarts

- PTMD restart
- LLDPD restart
- Quagga restart
- MSTP restart

5. Scripts

(the script is a black box to us, so not much to test here?)
(/etc/cumulus/ptm.d/if-topo-pass)
- What happens if the script file changes?
- will folks want to add PTM knowledge in if-down.d/if-up.d scripts?
  (e.g. query PTM status on 'if up' in /etc/network/if-up.d/... ?)
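
For "what happens if the script file changes?" (and the .dot-file change question in section 2), a harness can detect edits without inotify by polling st_mtime. A stdlib-only sketch; `mtime_of` and `changed` are hypothetical helper names, and a production watcher would likely use inotify instead:

```python
# Poll-based change detection for the topology file or the
# /etc/cumulus/ptm.d pass/fail scripts.
import os

def mtime_of(path):
    """mtime of path, or None if it has been removed."""
    try:
        return os.stat(path).st_mtime
    except FileNotFoundError:
        return None

def changed(path, last_mtime):
    """Return (did_change, current_mtime); call once per poll tick."""
    now = mtime_of(path)
    return (now != last_mtime, now)

# Poll loop (sketch):
#   last = mtime_of("/etc/cumulus/ptm.d/if-topo-pass")
#   while True:
#       hit, last = changed("/etc/cumulus/ptm.d/if-topo-pass", last)
#       if hit: rerun_topology_check()   # placeholder for the harness action
#       time.sleep(1)
```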

6. ptmcli

- multiple connections to PTMD
- check all commands we expose
- do we expose 'watch'? If so, check it in the presence of dynamic events.
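
The "multiple connections" case can be rehearsed before the real daemon is in the loop. In this sketch a dummy Unix-socket echo server stands in for PTMD (the real socket path and wire protocol are not modeled), so the many-clients harness logic itself can be exercised:

```python
# N clients connect to a stand-in Unix-socket server and each gets its
# query echoed back; swap the dummy server for the real PTMD socket later.
import os, socket, tempfile, threading

def serve(listener, n_clients):
    for _ in range(n_clients):
        conn, _ = listener.accept()
        with conn:
            conn.sendall(conn.recv(64))   # echo the client's query back

def run(n_clients=5):
    sock_path = os.path.join(tempfile.mkdtemp(), "ptmd.sock")  # stand-in path
    listener = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    listener.bind(sock_path)
    listener.listen(n_clients)
    t = threading.Thread(target=serve, args=(listener, n_clients))
    t.start()
    replies = []
    for _ in range(n_clients):
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as c:
            c.connect(sock_path)
            c.sendall(b"get-status")      # placeholder query, not a PTMD command
            replies.append(c.recv(64))
    t.join()
    listener.close()
    return replies

print(run(5))   # five echoed b'get-status' replies
```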

7. Quagga notification

- OSPFv2 network
- OSPFv3 network
- OSPFv2 and v3 network
- BGP network
- Static routes
- Link up/down and no PTM notification
- No PTM connection (e.g. start Quagga without PTMD, followed by 
    starting PTMD)
- Quagga CLI (PTM check) toggling

8. Cache

- reload with cache
- reload without cache

9. Performance

- reload (delay in bringing up routing),
   fully populated hardware
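
One way to measure that reload delay: fire the reload, then poll a convergence condition and record the elapsed time. `trigger` and `condition` are placeholders for whatever reload step and routing-up check the test uses on the box:

```python
# Generic delay measurement: seconds from trigger until condition holds,
# or None on timeout.
import time

def measure_delay(trigger, condition, timeout=60.0, poll=0.5):
    """Fire trigger(), then poll condition(); return elapsed seconds or None."""
    trigger()
    t0 = time.monotonic()
    while time.monotonic() - t0 < timeout:
        if condition():
            return time.monotonic() - t0
        time.sleep(poll)
    return None
```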

10. Scale

- #interfaces (128?)
- #nodes in the graph
- #clients to PTMD
- VLANs?
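
The scale runs need large graph files as input; generating them beats writing them by hand. A sketch that emits a DOT graph wiring every swp port of one node to a distinct peer; node/port names and the undirected "--" edge style are illustrative, not PTM requirements:

```python
# Scale-run input generator: DOT graph with n_ports edges, one per swp
# interface (128 matches the interface-count question above).
def make_topo(n_ports=128):
    lines = ["graph G {"]
    for i in range(1, n_ports + 1):
        lines.append(f'    "spine1":"swp{i}" -- "leaf{i}":"swp1";')
    lines.append("}")
    return "\n".join(lines)

print(make_topo(128).count("--"))   # 128 edges for the 128-interface case
```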
