Perhaps I am misunderstanding your point about the entities not being trained with intents, but I believe you can handle this in custom actions (unless you are referring to a purely NLU problem rather than a Core one, in which case I would need an example to understand better). If you are talking about Core, you can specify that given entities are only extracted for certain intents, though I confess I may not be grasping your situation properly. Effectively, you can build intent-specific entities with the appropriate intents and actions.
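To sketch what I mean by handling it in a custom action (this is a toy, plain-Python illustration, not real Rasa API code; the intent and entity names are made up):

```python
# Toy sketch (not actual Rasa code): only act on an entity when it arrived
# alongside an intent we consider valid for it. Names are hypothetical.
VALID_INTENTS_FOR_ENTITY = {
    "animal": {"ask_about_animal", "report_sighting"},
}

def extract_entity(intent, entities, entity_name):
    """Return the entity's value only if the intent is allowed to carry it."""
    if intent not in VALID_INTENTS_FOR_ENTITY.get(entity_name, set()):
        return None
    for ent in entities:
        if ent["entity"] == entity_name:
            return ent["value"]
    return None
```

The same check could live inside a custom action's `run` method, so an entity slipping in under an unrelated intent simply gets ignored.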
Regarding the animal classifier point, I’m not sure I understand. To state the obvious, if you want to classify birds you need birds in the training examples. I would, perhaps naively, expect that you could use low NLU confidence plus custom fallback policies to steer things in the right direction.
To clarify, I am not trying to dismiss the complexity of the task; rather, I lack a concrete example to get past my assumption that it can be done as is =) That said, a BoB would be cool, and I wonder if something like this could be done with a series of NLU and NLU + Core servers in a flock of microservices, as you describe. You could have one NLU server distinguish only the coarse-grained stuff, which Core then uses to decide which NLU server handles subsequent inputs (i.e. an NLU server for birds, an NLU server for dogs, etc.).
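The routing step could be as simple as this sketch (the endpoints and the coarse classifier are stand-ins, not real services):

```python
# Sketch of the flock-of-microservices idea: a coarse-grained classifier
# picks which specialized NLU endpoint handles subsequent turns.
# URLs and the keyword-based classifier below are hypothetical stand-ins.
SPECIALIST_SERVERS = {
    "birds": "http://nlu-birds:5005/model/parse",
    "dogs": "http://nlu-dogs:5005/model/parse",
}

def coarse_classify(text):
    """Stand-in for the coarse-grained NLU server's decision."""
    return "birds" if ("wing" in text or "beak" in text) else "dogs"

def route(text):
    """Return the specialist NLU endpoint for this input."""
    topic = coarse_classify(text)
    return SPECIALIST_SERVERS.get(topic)
```

In practice the coarse decision would come from a real NLU model and Core would forward later messages to the chosen endpoint, but the dispatch logic itself stays this simple.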