(Realistically, this post assumes familiarity with derived functors, chain complexes, and their homology. Ideally, the reader has played around with $\mathrm{Tor}$ and $\mathrm{Ext}$ a bit, as well as a few examples such as singular cohomology, group homology, etc. A lot of this material is taken from Weibel’s *An Introduction to Homological Algebra*.)
I’ve been putting a lot of energy into understanding homological algebra recently (following Weibel’s book). And if there’s one thing you do all the time in homological algebra, it’s resolve things (a resolution of a module $M$ by X objects, where X is some adjective, is an exact sequence $\cdots \to X_1 \to X_0 \to M \to 0$ or $0 \to M \to X^0 \to X^1 \to \cdots$ with each $X_i$ an X object). Resolutions help you to compute derived functors (e.g. the cohomology of something), which is a common goal. So I want to talk about how you can compute the derived functors $\mathrm{Tor}$ and $\mathrm{Ext}$ by resolving either of the two variables, and why we should care.
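As a concrete instance of the definition just given (a standard example, not specific to this post): over $R = \mathbb{Z}$, the module $\mathbb{Z}/n$ has the free (hence projective) resolution

```latex
0 \longrightarrow \mathbb{Z} \xrightarrow{\;n\;} \mathbb{Z} \longrightarrow \mathbb{Z}/n \longrightarrow 0,
```

where the middle map is multiplication by $n$. Applying a functor to the truncated complex $0 \to \mathbb{Z} \xrightarrow{n} \mathbb{Z} \to 0$ and taking homology is exactly the recipe for computing derived functors that comes up below.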
This may or may not really count as part of the “Shoulda Series” (1), since I’m pretty sure someone did tell me this at some point. But either way, I had to re-discover it for myself “in practice” before I “got it.”
Throughout, let $R$ be a ring with unit (not necessarily commutative, but you can take it to be $\mathbb{Z}$ if you want) and let $A$, $B$, and similar symbols be $R$-modules. We will use $P$ (resp., $I$) for projective (resp., injective) $R$-modules. (Take all modules to be, say, right.) The first thing to recall is that for fixed $A$ and $B$, the functors $A \otimes_R -$ and $- \otimes_R B$ are right exact, while the functors $\mathrm{Hom}_R(A, -)$ and $\mathrm{Hom}_R(-, B)$ are left exact. The first three are covariant functors $\mathbf{mod}\text{-}R \to \mathbf{Ab}$, but the fourth is a functor $(\mathbf{mod}\text{-}R)^{\mathrm{op}} \to \mathbf{Ab}$; this important point will become central in just a bit, so make sure you understand why! (Basically, a contravariant functor $\mathcal{C} \to \mathcal{D}$ is the same as a covariant functor $\mathcal{C}^{\mathrm{op}} \to \mathcal{D}$. By $\mathcal{C}^{\mathrm{op}}$, the opposite category of $\mathcal{C}$, we simply mean the category obtained from $\mathcal{C}$ by keeping the same objects and reversing all the arrows.)
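To see why these functors are only half exact, the standard example over $\mathbb{Z}$ suffices: apply $- \otimes_{\mathbb{Z}} \mathbb{Z}/2$ to the short exact sequence $0 \to \mathbb{Z} \xrightarrow{2} \mathbb{Z} \to \mathbb{Z}/2 \to 0$. The result is

```latex
\mathbb{Z}/2 \xrightarrow{\;2\,=\,0\;} \mathbb{Z}/2 \longrightarrow \mathbb{Z}/2 \otimes_{\mathbb{Z}} \mathbb{Z}/2 \longrightarrow 0,
```

which is exact, but the first map (multiplication by $2$, which is now the zero map) is no longer injective. So $- \otimes_{\mathbb{Z}} \mathbb{Z}/2$ is right exact but not exact, and the failure of exactness on the left is precisely what $\mathrm{Tor}$ will measure.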
We may now define the $\mathrm{Tor}$ and $\mathrm{Ext}$ functors:
Definition: Define:
1. $\mathrm{Tor}_n(A, -)$ to be the left derived functors of $A \otimes_R -$,
2. $\mathrm{tor}_n(-, B)$ to be the left derived functors of $- \otimes_R B$,
3. $\mathrm{Ext}^n(A, -)$ to be the right derived functors of $\mathrm{Hom}_R(A, -)$, and
4. $\mathrm{ext}^n(-, B)$ to be the right derived functors of $\mathrm{Hom}_R(-, B)$.
The good news is:
Theorem: There are natural isomorphisms $\mathrm{Tor}_n(A, B) \cong \mathrm{tor}_n(A, B)$ and $\mathrm{Ext}^n(A, B) \cong \mathrm{ext}^n(A, B)$.
We denote the common values $\mathrm{Tor}_n^R(A, B)$ and $\mathrm{Ext}^n_R(A, B)$. The basic idea of the proof of the $\mathrm{Tor}$ half of this theorem is to take projective resolutions $P_\bullet \to A$ of $A$ and $Q_\bullet \to B$ of $B$, form the tensor product bicomplex $P_\bullet \otimes_R Q_\bullet$ from these two resolutions, and then show that a certain chain complex is acyclic. (This chain complex is closely related to $\mathrm{Tot}^{\oplus}(P_\bullet \otimes_R Q_\bullet)$, the total direct sum complex associated to the bicomplex $P_\bullet \otimes_R Q_\bullet$.) One then shows that $H_n\bigl(\mathrm{Tot}^{\oplus}(P_\bullet \otimes_R Q_\bullet)\bigr)$ is naturally isomorphic to each of the two derived functors. For $\mathrm{Ext}$ the proof is similar, using injective resolutions, $\mathrm{Hom}_R$, and the total product complex $\mathrm{Tot}^{\Pi}$ instead. See Weibel, section 2.7 for the details.
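As a sanity check on the definitions (a standard computation, using notation from above), one can work out $\mathrm{Tor}_*^{\mathbb{Z}}(\mathbb{Z}/m, \mathbb{Z}/n)$ by hand. Resolve the first variable by $0 \to \mathbb{Z} \xrightarrow{m} \mathbb{Z} \to \mathbb{Z}/m \to 0$, drop the resolved term, and tensor with $\mathbb{Z}/n$ to get the complex

```latex
0 \longrightarrow \mathbb{Z}/n \xrightarrow{\;m\;} \mathbb{Z}/n \longrightarrow 0.
```

Its homology gives $\mathrm{Tor}_0 = \mathbb{Z}/\gcd(m,n)$ (the cokernel of multiplication by $m$), $\mathrm{Tor}_1 \cong \mathbb{Z}/\gcd(m,n)$ (the kernel), and $\mathrm{Tor}_n = 0$ for $n \geq 2$, since the resolution has length one.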
In many situations, our goal is to compute (or at least gain knowledge about) $\mathrm{Tor}_*^R(A, B)$ and $\mathrm{Ext}^*_R(A, B)$. Recall that to compute left derived functors we resolve by projective objects, and to compute right derived functors we resolve by injective objects. Projective objects in the category $\mathbf{mod}\text{-}R$ are great: a module is projective if and only if it is a direct summand of a free module. In particular, all free modules are projective. In practice, one can often use finite-rank free resolutions, which are comparatively easy to compute with (and can lead to finiteness results on the derived functors, automatically). One great example of this is the bar resolution (this is the chain complex described here), whose existence immediately tells you that the group homology $H_*(G; M)$ of a finite group $G$ has finite rank whenever the representation $M$ does.
But injective objects are not as nice to work with. The only decent general fact I am aware of is the following.
Baer’s Criterion: Let $E$ be an $R$-module. Then $E$ is injective if and only if for every ideal $J$ of $R$ and every module homomorphism $f : J \to E$, there is a homomorphism $\tilde{f} : R \to E$ extending $f$.
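A quick application of the criterion (a classical example, with notation matching the statement above): $\mathbb{Q}$ is an injective $\mathbb{Z}$-module. Every nonzero ideal of $\mathbb{Z}$ has the form $(n)$, and a homomorphism $f : (n) \to \mathbb{Q}$ is determined by $f(n)$, so it extends to all of $\mathbb{Z}$ by setting

```latex
\tilde{f} : \mathbb{Z} \to \mathbb{Q}, \qquad \tilde{f}(1) = \frac{f(n)}{n},
```

which makes sense because $\mathbb{Q}$ is divisible (the zero ideal extends trivially). The same argument shows that every divisible abelian group, e.g. $\mathbb{Q}/\mathbb{Z}$, is injective over $\mathbb{Z}$.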
This isn’t bad, but it isn’t nearly as helpful as in the projective case. And in fact, most injective modules turn out to be huge and/or nasty in some sense. So it appears that, in general, $\mathrm{Ext}$ will be harder to compute than $\mathrm{Tor}$. This is a real shame, since $\mathrm{Ext}$ usually has more interesting structure! (Think of $\mathrm{Ext}$ as cohomology, where there is usually an interesting product, e.g., the cup product on the singular cohomology of topological spaces.)
But all is not lost: Remember that the functor $\mathrm{Hom}_R(-, B)$ is contravariant; equivalently, it is a (covariant) functor $(\mathbf{mod}\text{-}R)^{\mathrm{op}} \to \mathbf{Ab}$. Since the universal property defining projective objects is dual to the universal property defining injective objects, it follows that the injectives of $\mathcal{C}$ are precisely the projectives of $\mathcal{C}^{\mathrm{op}}$! So when computing $\mathrm{Ext}^*_R(A, B)$, we can either resolve $B$ by injective $R$-modules (usually messy and/or difficult) or resolve $A$ by projective $R$-modules (usually much nicer). For instance, the bar resolution mentioned earlier, which is a resolution of the trivial $\mathbb{Z}G$-module $\mathbb{Z}$ by free $\mathbb{Z}G$-modules for a group $G$, can be used to compute group cohomology, i.e., the groups $H^*(G; M) \cong \mathrm{Ext}^*_{\mathbb{Z}G}(\mathbb{Z}, M)$. Hence as long as we are content to always resolve the first variable, $\mathrm{Ext}$ is just as easy to compute in general as $\mathrm{Tor}$.
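To see the first-variable strategy in action (again a standard computation, not specific to this post), take $\mathrm{Ext}^*_{\mathbb{Z}}(\mathbb{Z}/n, \mathbb{Z})$. Resolve the first variable projectively by $0 \to \mathbb{Z} \xrightarrow{n} \mathbb{Z} \to \mathbb{Z}/n \to 0$, drop the resolved term, and apply $\mathrm{Hom}_{\mathbb{Z}}(-, \mathbb{Z})$, which reverses the arrows and gives the cochain complex

```latex
0 \longrightarrow \mathbb{Z} \xrightarrow{\;n\;} \mathbb{Z} \longrightarrow 0.
```

Its cohomology is $\mathrm{Ext}^0 = \mathrm{Hom}(\mathbb{Z}/n, \mathbb{Z}) = 0$ and $\mathrm{Ext}^1 \cong \mathbb{Z}/n$, with no injective resolution of $\mathbb{Z}$ (which would drag in $\mathbb{Q}$ and $\mathbb{Q}/\mathbb{Z}$) anywhere in sight.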
Finally, I want to discuss something which puzzles me. The tensor product functor is left adjoint to the $\mathrm{Hom}$ functor; that is, we have an isomorphism

$$\mathrm{Hom}_T(A \otimes_S B, C) \cong \mathrm{Hom}_S(A, \mathrm{Hom}_T(B, C)),$$

valid whenever $A$ is a right $S$-module, $B$ is an $(S, T)$-bimodule, and $C$ is a right $T$-module; this isomorphism is natural in all three modules. And one can show that this adjunction holds for the corresponding derived functors as well. So there is a very fundamental symmetry between the bifunctors $\otimes$ and $\mathrm{Hom}$. Simplifying to the case where $R$ is commutative, we have

$$\mathrm{Hom}_R(A \otimes_R B, C) \cong \mathrm{Hom}_R(A, \mathrm{Hom}_R(B, C)).$$
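For what it’s worth, the derived form of this adjunction can be stated cleanly in derived-category language (a reformulation on my part, so the precise hypotheses should be checked against a reference; $D(R)$ denotes the derived category of $R$-modules):

```latex
\mathrm{Hom}_{D(R)}\bigl(A \otimes_R^{\mathbf{L}} B,\; C\bigr)
\;\cong\;
\mathrm{Hom}_{D(R)}\bigl(A,\; \mathbf{R}\mathrm{Hom}_R(B, C)\bigr),
```

where $\otimes^{\mathbf{L}}$ and $\mathbf{R}\mathrm{Hom}$ are the total derived functors whose homology and cohomology recover $\mathrm{Tor}$ and $\mathrm{Ext}$ respectively.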
In this most important of adjunctions, why is there an opposite-category variable in one bifunctor ($\mathrm{Hom} : \mathcal{C}^{\mathrm{op}} \times \mathcal{C} \to \mathbf{Ab}$) but not in the other ($\otimes : \mathcal{C} \times \mathcal{C} \to \mathbf{Ab}$)? Life would seem to make more sense if each of the two had one ordinary- and one opposite-category variable. I suspect that this may have to do with the fact that things are not as symmetric as they seem: even if $R$ is commutative, so that left and right modules are equivalent, we are still talking about algebras (rings) and modules, while dually we could also talk about coalgebras and comodules. See the questions below, and enlighten me, please.
Questions
Here are a few questions which are bothering me, mostly related to the above. Comments, suggestions, examples, problems, etc. are more than welcome!
1. Philosophically/fuzzily/whateverly, why is there this weird asymmetry between $\otimes$/$\mathrm{Tor}$ and $\mathrm{Hom}$/$\mathrm{Ext}$? Maybe the only answer is that “there happens to be an interesting adjunction of bifunctors where one side is covariant-covariant and the other side is contravariant-covariant.” But this is really unsatisfying.
2. Is the answer to question 1 related to the fact that we are talking about algebras and modules, rather than coalgebras and comodules? If this is the case, then what do these bifunctors and adjunctions look like in the case of bialgebras?
3. Cohomology/$\mathrm{Ext}$ has an interesting product structure. Does homology/$\mathrm{Tor}$ have a coproduct structure? If so, when is it interesting?
4. Less related, but recently bothering me: Does anyone know of an example of a non-commutative ring which is Morita equivalent to its opposite?
[Background for 4: We say rings $R$ and $S$ are Morita equivalent if the categories $R\text{-}\mathbf{mod}$ and $S\text{-}\mathbf{mod}$ of left modules are equivalent. So in this case, I am asking for a ring whose left and right modules agree in some reasonable natural way, but which is not commutative.]