# Grassmannians 1

Let $V := \mathbb{C}^{n}$, or more generally a finite-dimensional $\mathbb{C}$-vector space. Then we define $G(k, V)$ (or $G(k, n)$) as the set of all $k$-dimensional subspaces (referred to as “$k$-planes”) of $V$. (Of course, we assume $0 \leq k \leq n$.)

Any $k$-plane $P$ may be represented by a $k \times n$ matrix of rank $k$: the rows of the matrix form a basis of $P$. Denote by $M_{k}(k, n)$ the set of all $k \times n$ matrices of rank $k$. Then we may identify

$G(k, n) = M_{k}(k, n)/GL(k)$, the set of orbits of the left multiplication action of $GL(k)$.

Exercise 1.1. Justify the above identification by showing the following statement. Two matrices $A, B \in M_{k}(k, n)$ represent the same $k$-plane if and only if there exists $g \in GL(k)$ such that $B = gA$.

(Hint: Show that $A$ and $B$ represent the same $k$-plane if and only if each row of $A$ is a linear combination of the rows of $B$ and vice versa, i.e., there are $k \times k$ matrices $g, h$ such that $B = gA$ and $A = hB$. Combining the two gives $(gh - I)B = 0$; since $B$ has rank $k$, it admits a right inverse, so $gh = I$ and thus $g, h \in GL(k)$.)
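As a numerical sanity check (not a proof), one can test the row-space criterion on a small example. The helper `same_row_space` below is an ad hoc name, using the fact that two matrices have the same row space exactly when stacking their rows does not increase the rank.

```python
import numpy as np

A = np.array([[1., 0., 2., 3.],
              [0., 1., 1., 1.]])      # rank-2 representative of a 2-plane in C^4
g = np.array([[2., 1.],
              [1., 1.]])              # g in GL(2): det = 1
B = g @ A                             # rows span the same 2-plane

def same_row_space(X, Y):
    # Row spaces agree iff stacking the rows does not increase the rank.
    r = np.linalg.matrix_rank
    return r(X) == r(Y) == r(np.vstack([X, Y]))

assert same_row_space(A, B)

# A matrix with a different row space fails the test.
C = np.array([[1., 0., 0., 0.],
              [0., 1., 0., 0.]])
assert not same_row_space(A, C)
```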

Exercise 1.2. Show that $\dim G(k, n) = k(n-k)$.

(Hint: From the hint given for Exercise 1.1, note that acting by $GL(k)$ does not change the row space. After acting by an element of $GL(k)$ (and, if necessary, permuting columns), any rank $k$ matrix may be brought to the form of a $k \times k$ identity matrix augmented on the right by a $k \times (n - k)$ matrix. Argue that the entries of this $k \times (n - k)$ block give local coordinates, so the whole Grassmannian has dimension $k(n-k)$.)
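The normal form $[\,I_{k} \mid M\,]$ in the hint can be computed directly: assuming the first $k$ columns of $A$ are independent, multiplying on the left by their inverse (an element of $GL(k)$) exposes the $k(n-k)$ free entries. A small sketch:

```python
import numpy as np

# If the first k columns of a rank-k matrix A are independent, multiplying on
# the left by their inverse (an element of GL(k)) gives the normal form
# [I_k | M]; the k x (n - k) entries of M are free local coordinates.
k, n = 2, 4
A = np.array([[2., 1., 5., 7.],
              [1., 1., 3., 4.]])
g = np.linalg.inv(A[:, :k])
normal_form = g @ A
assert np.allclose(normal_form[:, :k], np.eye(k))

M = normal_form[:, k:]
assert M.size == k * (n - k)   # 4 free parameters, matching dim G(2, 4) = 4
```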

Recall that we can projectivize a vector space as follows:

$\mathbb{P}(\mathbb{C}^{n+1}) := \mathbb{P}^{n} = \{\text{lines in }\mathbb{C}^{n+1}\} = G(1, n+1)$.

If $V \subseteq W$, then clearly $\mathbb{P}(V) \subseteq \mathbb{P}(W)$. This gives an identification $V \leftrightarrow \mathbb{P}(V)$ between linear subspaces and projective subspaces.

Example. We have

$G(2, 4) = \{V \subseteq \mathbb{C}^{4} : V \simeq \mathbb{C}^{2}\} \simeq \{\mathbb{P}(V) \subseteq \mathbb{P}^{3} : \mathbb{P}(V) \simeq \mathbb{P}^{1}\}$.

There is another way to describe $G(2, 4)$. We review the relevant linear algebra first.

Recall that $\wedge^{k}(V)$ is the set of alternating $k$-multilinear functions $f : V^{k} \rightarrow \mathbb{C}$. Given $f \in \wedge^{k}(V)$ and $g \in \wedge^{l}(V)$, we define

$f \wedge g := A(f \otimes g)/(k!\,l!) \in \wedge^{k+l}(V)$, where $A$ denotes the alternation (antisymmetrization) operator.

That is, we have

$(f \wedge g)(v_{1}, \cdots, v_{k+l}) = \frac{1}{k!\,l!}\sum_{\sigma \in S_{k+l}}\operatorname{sgn}(\sigma)\, f(v_{\sigma(1)}, \cdots, v_{\sigma(k)})\, g(v_{\sigma(k+1)}, \cdots, v_{\sigma(k+l)})$.

Exercise 1.3. Show that $f \wedge g = (-1)^{kl}(g \wedge f)$.
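Exercise 1.3 can be checked numerically for small $k, l$ by implementing the alternation formula directly, with each permutation weighted by its sign. The functions `sign` and `wedge` below are ad hoc helpers, not from a library.

```python
from itertools import permutations
from math import factorial

def sign(perm):
    # Sign of a permutation given as a tuple of indices, via its cycle type.
    s, seen = 1, set()
    for i in range(len(perm)):
        if i in seen:
            continue
        j, cycle = i, 0
        while j not in seen:
            seen.add(j)
            j = perm[j]
            cycle += 1
        s *= (-1) ** (cycle - 1)
    return s

def wedge(f, k, g, l):
    # The alternation formula for (f wedge g) on k+l vectors.
    def fg(*v):
        total = 0
        for p in permutations(range(k + l)):
            total += sign(p) * f(*(v[p[i]] for i in range(k))) \
                             * g(*(v[p[k + i]] for i in range(l)))
        return total / (factorial(k) * factorial(l))
    return fg

# f = e1*, g = e2*: both 1-forms on R^2, so f ^ g = (-1)^{1*1} (g ^ f).
f = lambda v: v[0]
g = lambda v: v[1]
v1, v2 = (3, 5), (2, 7)
assert wedge(f, 1, g, 1)(v1, v2) == -wedge(g, 1, f, 1)(v1, v2)
assert wedge(f, 1, g, 1)(v1, v2) == 3 * 7 - 5 * 2   # the 2x2 determinant
```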

Exercise 1.4. Let $e_{1}^{*}, \cdots, e_{n}^{*} \in (\mathbb{C}^{n})^{*}$ be the standard dual basis. Given $I = (1 \leq i_{1} < \cdots < i_{k} \leq n)$ and $J = (1 \leq j_{1} < \cdots < j_{k} \leq n)$, write $e_{I}^{*} := e_{i_{1}}^{*} \wedge \cdots \wedge e_{i_{k}}^{*}$ and $e_{J} := (e_{j_{1}}, \cdots, e_{j_{k}})$, and show that $e_{I}^{*}(e_{J}) = \delta_{IJ}$, which is $1$ when $I = J$ and $0$ otherwise. Conclude that the set $\{e_{i_{1}}^{*} \wedge \cdots \wedge e_{i_{k}}^{*} : 1 \leq i_{1} < \cdots < i_{k} \leq n\}$ forms a basis for $\wedge^{k}\mathbb{C}^{n}$. (In particular, we have $\dim \wedge^{k}\mathbb{C}^{n} = {n \choose k}$, and of course we are assuming $0 \leq k \leq n$. If $k > n$, the $k$th exterior power is just zero.)

Convention. Given $u = a_{1}e_{1} + \cdots + a_{n}e_{n} \in \mathbb{C}^{n}$, we have a unique dual image $u^{*} = a_{1}e_{1}^{*} + \cdots + a_{n}e_{n}^{*} \in V^{*}$. Thus, if $u, v \in \mathbb{C}^{n}$, then we write $u \wedge v := u^{*} \wedge v^{*}$.

Exercise 1.5. For the standard basis $e_{1}, \cdots, e_{n} \in \mathbb{C}^{n}$, show that $(e_{1} \wedge \cdots \wedge e_{n})(v_{1}, \cdots, v_{n}) = \det[e_{i}^{*}(v_{j})]$.

Exercise 1.6. Given bases $\{v_{1}, \cdots, v_{n}\}$ and $\{w_{1}, \cdots, w_{n}\}$ of $\mathbb{C}^{n}$, take the matrix $A = (a_{ij})$ given by $v_{i} = \sum_{j=1}^{n}a_{ij}w_{j}$. In other words, the $i$th row of $A$ consists of the coordinates of $v_{i}$ in the basis $\{w_{j}\}_{j=1}^{n}$. Show that

$v_{1} \wedge \cdots \wedge v_{n} = \det(A) w_{1} \wedge \cdots \wedge w_{n}$.

Thus if $A = (a_{ij})$ is an $n \times n$ matrix whose rows we wedge together, then

$(a_{11}, \cdots, a_{1n}) \wedge \cdots \wedge (a_{n1}, \cdots, a_{nn}) = \det(A)\, e_{1} \wedge \cdots \wedge e_{n}$.

Now we describe $G(2, 4)$ in a different way. First consider the model case $V = \mathbb{C}e_{1} \oplus \mathbb{C}e_{2} \subseteq \mathbb{C}^{4}$, with a basis $u, v$ of $V$, and the map $(u, v) \mapsto u \wedge v \in \wedge^{2}\mathbb{C}^{4}$. If we write $u = a_{11}e_{1} + a_{12}e_{2}$ and $v = a_{21}e_{1} + a_{22}e_{2}$, then $A = (a_{ij})$ is the $2 \times 2$ coordinate matrix whose rows represent $u, v$. Writing $\det(u, v) := \det A$, we have $(u \wedge v)/\det(u, v) = e_{1}^{*} \wedge e_{2}^{*}$. Thus for any other basis $u', v'$ of $V$, we have

$u' \wedge v' = (\det(u', v')/\det(u, v))(u \wedge v)$.

Now, choosing a basis $u, v$ of $V$, define $V \mapsto \wedge^{2}V := \mathbb{C}(u \wedge v) \subseteq \wedge^{2}\mathbb{C}^{4} \simeq \mathbb{C}^{6}$; this is well-defined since the line $\wedge^{2}V$ does not depend on the choice of basis of $V$.

More concretely, write $\mathbb{C}^{4} = \mathbb{C}e_{1} \oplus \mathbb{C}e_{2} \oplus \mathbb{C}e_{3} \oplus \mathbb{C}e_{4}$. Then $\wedge^{2}\mathbb{C}^{4} = \mathbb{C}e_{12} \oplus \cdots \oplus \mathbb{C}e_{34}$, where $e_{ij} = e_{i} \wedge e_{j} = - e_{j} \wedge e_{i} = -e_{ji}$. This means that we can recognize $\wedge^{2}\mathbb{C}^{4}$ as the space of $4 \times 4$ skew-symmetric matrices:

$x_{12}e_{12} + x_{13}e_{13} + x_{14}e_{14} + x_{23}e_{23} + x_{24}e_{24} + x_{34}e_{34} \leftrightarrow \begin{bmatrix} 0 & x_{12} & x_{13} & x_{14} \\ -x_{12} & 0 & x_{23} & x_{24} \\ -x_{13} & -x_{23} & 0 & x_{34} \\ -x_{14} & -x_{24} & -x_{34} & 0 \end{bmatrix}$.

Since $\dim \wedge^{2}V = {2 \choose 2} = 1$, we have

$\wedge^{2} : G(2,4) \rightarrow G(1, \wedge^{2} \mathbb{C}^{4}) \simeq G(1, \mathbb{C}^{6}) = \mathbb{P}(\mathbb{C}^{6}) = \mathbb{P}^{5}$.
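Concretely, the coordinates of $u \wedge v$ are the $2 \times 2$ minors $x_{ij} = u_{i}v_{j} - u_{j}v_{i}$, and it is a classical fact that the image of $G(2,4)$ in $\mathbb{P}^{5}$ is cut out by the Plücker relation $x_{12}x_{34} - x_{13}x_{24} + x_{14}x_{23} = 0$. A quick numerical check (the helper `plucker` is an ad hoc name):

```python
import numpy as np

def plucker(u, v):
    # The six coordinates x_ij = u_i v_j - u_j v_i of u wedge v (0-indexed).
    return {(i, j): u[i] * v[j] - u[j] * v[i]
            for i in range(4) for j in range(i + 1, 4)}

u = np.array([1., 2., 3., 4.])
v = np.array([0., 1., 1., 2.])
x = plucker(u, v)

# The Plücker relation x12 x34 - x13 x24 + x14 x23 = 0.
rel = x[(0, 1)]*x[(2, 3)] - x[(0, 2)]*x[(1, 3)] + x[(0, 3)]*x[(1, 2)]
assert abs(rel) < 1e-12

# Changing the basis of the plane rescales all coordinates by the same det.
up, vp = 2*u + v, u - v
xp = plucker(up, vp)
d = np.linalg.det(np.array([[2., 1.], [1., -1.]]))   # = -3
assert all(abs(xp[ij] - d * x[ij]) < 1e-12 for ij in x)
```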

The following exercise gives an intuitive picture of $\wedge$.

Exercise 1.7. Given $v = (v_{1}, v_{2}, v_{3}) \in \mathbb{C}^{3}$, define $\tilde{v} := v_{1}e_{23} + v_{2}e_{31} + v_{3}e_{12} \in \wedge^{2}\mathbb{C}^{3}$. Given $v, w \in \mathbb{C}^{3}$, show $\widetilde{v \times w} = v \wedge w$.
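In coordinates this is a one-line check: the components of $v \times w$ are exactly the coefficients of $v \wedge w$ on $e_{23}, e_{31}, e_{12}$. A sketch with numpy (`wedge_coeffs` is an ad hoc helper):

```python
import numpy as np

def wedge_coeffs(v, w):
    # Coefficients of v wedge w on (e23, e31, e12).
    return np.array([v[1]*w[2] - v[2]*w[1],
                     v[2]*w[0] - v[0]*w[2],
                     v[0]*w[1] - v[1]*w[0]])

v = np.array([1., 2., 3.])
w = np.array([4., 5., 6.])
assert np.allclose(np.cross(v, w), wedge_coeffs(v, w))
```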

Exercise 1.8. Show that if $A$ is an $n \times n$ skew-symmetric matrix with $n$ odd, then $\det(A) = 0$.

Exercise 1.9. (Hard) Show that if $A = (x_{ij})$ is an $n \times n$ skew-symmetric matrix with $n$ even, then there is a polynomial $p(x_{ij})$ such that $\det(A) = p(x_{ij})^{2}$. This polynomial is called the Pfaffian of $A$.
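For $n = 4$, in the coordinates of the skew-symmetric matrix above, the Pfaffian is $x_{12}x_{34} - x_{13}x_{24} + x_{14}x_{23}$, and one can verify $\det(A) = \mathrm{pf}(A)^{2}$ numerically:

```python
import numpy as np

# For a 4x4 skew-symmetric matrix, det(A) = (x12 x34 - x13 x24 + x14 x23)^2;
# the quantity being squared is the Pfaffian.
x12, x13, x14, x23, x24, x34 = 1., 2., 3., 4., 5., 6.
A = np.array([[0,    x12,  x13,  x14],
              [-x12, 0,    x23,  x24],
              [-x13, -x23, 0,    x34],
              [-x14, -x24, -x34, 0]])
pf = x12*x34 - x13*x24 + x14*x23
assert np.isclose(np.linalg.det(A), pf**2)
```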

Exercise 1.10. Explain how to define the rank of elements of $\wedge^{k}\mathbb{C}^{n}$. (Hint: for $k = 2$, identify an element of $\wedge^{2}\mathbb{C}^{n}$ with an $n \times n$ skew-symmetric matrix, as above.)

# A comparison between change of variables and substitution

If $\varphi:[a,b] \rightarrow \mathbb{R}$ is a $C^{1}$ function (i.e., its derivative exists and is continuous), then for any continuous $f:\varphi([a, b]) \rightarrow \mathbb{R}$, we have

$\displaystyle\int_{a}^{b}f(\varphi(x))\varphi'(x)dx = \int_{\varphi(a)}^{\varphi(b)}f(t)dt$.

Now consider an injective $C^{1}$ function $\phi : U \rightarrow \mathbb{R}^{n}$, where $U \subseteq \mathbb{R}^{n}$ is open. (Here, $C^{1}$ means that the partial derivatives exist and are continuous.) Suppose that $f$ is a continuous real-valued function compactly supported in $\phi(U)$. Then

$\displaystyle\int_{U}f(\phi(x))|\det\phi'(x)|dx = \int_{\phi(U)}f(t)dt$.

The question is: where is the absolute value in the single variable case?

Notice that in the multivariable case, we only consider injective $\phi$. If we also assume $\varphi$ is injective in the single variable case, then $\varphi$ is monotone, so either $\varphi' \geq 0$ throughout or $\varphi' \leq 0$ throughout. If $\varphi' \geq 0$, the two formulas evidently coincide, so assume that $\varphi' \leq 0$. Then

$\displaystyle\int_{a}^{b}f(\varphi(x))|\varphi'(x)|dx = -\int_{a}^{b}f(\varphi(x))\varphi'(x)dx = \int_{\varphi(b)}^{\varphi(a)}f(t)dt.$

But notice that $\varphi([a, b]) = [\varphi(b), \varphi(a)]$, since $\varphi$ decreases! Thus the two cases match.
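The bookkeeping can be verified symbolically on a concrete decreasing substitution, say $\varphi(x) = 1 - x$ on $[0, 1]$ (any continuous $f$ would do; $f(t) = t^{2} + t$ is an arbitrary choice):

```python
import sympy as sp

# phi(x) = 1 - x on [0, 1]: phi' = -1 < 0 and phi([0, 1]) = [0, 1].
x, t = sp.symbols('x t')
phi = 1 - x
f = t**2 + t

# Version with |phi'|: integrates f over the image interval [0, 1].
lhs = sp.integrate(f.subs(t, phi) * sp.Abs(sp.diff(phi, x)), (x, 0, 1))
rhs = sp.integrate(f, (t, 0, 1))
assert sp.simplify(lhs - rhs) == 0

# Signed version: integrates from phi(0) = 1 down to phi(1) = 0.
signed = sp.integrate(f.subs(t, phi) * sp.diff(phi, x), (x, 0, 1))
assert sp.simplify(signed - sp.integrate(f, (t, 1, 0))) == 0
```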

# A good way to write a plane

Suppose that a plane contains two nonparallel lines with direction vectors $v$ and $w$. If $p$ is a point on the plane, then the plane is given by the equation $\det(x - p, v, w) = 0$, where the determinant is taken of the $3 \times 3$ matrix with columns $x - p$, $v$, $w$.
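A quick numerical check of this equation (the helper `plane_eq` is an ad hoc name): points of the form $p + sv + tw$ satisfy it, while moving off the plane along the normal direction $v \times w$ does not.

```python
import numpy as np

def plane_eq(x, p, v, w):
    # det(x - p, v, w): vanishes exactly when x - p lies in span(v, w).
    return np.linalg.det(np.column_stack([x - p, v, w]))

p = np.array([1., 0., 2.])
v = np.array([1., 1., 0.])
w = np.array([0., 1., 1.])

on_plane = p + 2.0 * v - 3.0 * w          # p + s v + t w lies on the plane
assert abs(plane_eq(on_plane, p, v, w)) < 1e-12

off_plane = on_plane + np.cross(v, w)      # move along the normal direction
assert abs(plane_eq(off_plane, p, v, w)) > 1e-9
```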

# Connected component is closed

Any connected component is closed: a component is a maximal connected set containing a given point, and the closure of a connected set is again connected, so the component must already contain its own closure.

# From classical geometry to algebraic geometry

This post is an elementary discussion of how to understand points of $k^{n}$ as points of $\mathbb{A}_{k}^{n} = \operatorname{Spec} k[x_{1}, \cdots, x_{n}]$, where $k$ is a (not necessarily algebraically closed) field. The reference is Ravi Vakil’s lecture notes, though the post is not identical to the notes.

Theorem. Given a point $(a_{1}, \cdots, a_{n}) \in k^{n}$, the kernel of the evaluation homomorphism (at the point) $k[x_{1}, \cdots, x_{n}] \rightarrow k$ given by $f \mapsto f(a_{1}, \cdots, a_{n})$ is $(x_{1} - a_{1}, \cdots, x_{n} - a_{n})$. In particular, the ideal $(x_{1} - a_{1}, \cdots, x_{n} - a_{n})$ is maximal, so $(x_{1} - a_{1}, \cdots, x_{n} - a_{n}) \in \operatorname{Spec} k[x_{1}, \cdots, x_{n}]$.

Remark. It is rather hard to compute the kernel in a naive way. Let’s try it. Suppose that $f(a_{1}, \cdots, a_{n}) = 0$. To show that $f \in (x_{1} - a_{1}, \cdots, x_{n} - a_{n})$ directly, we need to find $g_{1}, \cdots, g_{n} \in k[x_{1}, \cdots, x_{n}]$ such that

$f(x_{1}, \cdots, x_{n}) = g_{1}(x_{1}, \cdots, x_{n})(x_{1} - a_{1}) + \cdots + g_{n}(x_{1}, \cdots, x_{n})(x_{n} - a_{n})$.

But there is a way to get around this difficulty: to get help from morphisms!

Proof. Define $\phi : k[x_{1}, \cdots, x_{n}]/(x_{1} - a_{1}, \cdots, x_{n} - a_{n}) \rightarrow k$ by $\bar{f} \mapsto f(a_{1}, \cdots, a_{n})$. This is well-defined because if $\bar{f} = 0$, then $f(x_{1}, \cdots, x_{n}) = \sum_{j=1}^{n}h_{j}(x_{1}, \cdots, x_{n})(x_{j} - a_{j})$ so that $f(a_{1}, \cdots, a_{n}) = 0$.

Now, define $\psi : k \rightarrow k[x_{1}, \cdots, x_{n}]/(x_{1} - a_{1}, \cdots, x_{n} - a_{n})$ by $c \mapsto \bar{c}$. Then

$\psi\phi(\bar{f}) = \overline{f(a_{1}, \cdots, a_{n})} = f(\bar{a_{1}}, \cdots, \bar{a_{n}}) = f(\bar{x_{1}}, \cdots, \bar{x_{n}}) = \bar{f}$,

and $\phi\psi(c) = \phi(\bar{c}) = c$, so $\phi$ is an isomorphism. Let $K$ be the kernel of evaluation. Then we have constructed the following isomorphism

$k[x_{1}, \cdots, x_{n}]/(x_{1} - a_{1}, \cdots, x_{n} - a_{n}) \overset{\sim}{\longrightarrow} k[x_{1}, \cdots, x_{n}]/K$

given by $f \mod (x_{1} - a_{1}, \cdots, x_{n} - a_{n}) \mapsto f \mod K$. Since this isomorphism is induced by the identity on $k[x_{1}, \cdots, x_{n}]$, an element reduces to zero on one side if and only if it does on the other, so $(x_{1} - a_{1}, \cdots, x_{n} - a_{n}) = K$. Q.E.D.
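The theorem can be sanity-checked with a computer algebra system: dividing $f$ by the generators $x_{i} - a_{i}$ leaves the constant remainder $f(a_{1}, \cdots, a_{n})$, so $f$ lies in the ideal exactly when it vanishes at the point. A sketch with sympy, for $n = 2$ and the (arbitrarily chosen) point $(2, 3)$:

```python
import sympy as sp

x, y = sp.symbols('x y')
a, b = 2, 3
f = x**2 * y - 5 * x + y**3 + 1

# Multivariate division: remainder of f modulo (x - a, y - b).
_, r = sp.reduced(f, [x - a, y - b])
assert r == f.subs({x: a, y: b})           # remainder = value at the point

g = f - f.subs({x: a, y: b})               # now g(a, b) = 0 ...
_, r2 = sp.reduced(g, [x - a, y - b])
assert r2 == 0                             # ... and g lies in the ideal
```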

Remark. The key of the above proof was to recognize that

$\overline{f(x_{1}, \cdots, x_{n})} = f(\bar{x_{1}}, \cdots, \bar{x_{n}})$

in the quotient ring $k[x_{1}, \cdots, x_{n}]/I$, where $I$ is a given ideal of the polynomial ring. Let $\pi : k[x_{1}, \cdots, x_{n}] \rightarrow k[x_{1}, \cdots, x_{n}]/I$ be the quotient map. The above identity can also be understood as

$\pi(k)[\bar{x_{1}}, \cdots, \bar{x_{n}}] = k[x_{1}, \cdots, x_{n}]/I$,

so it is acceptable to write

$\bar{a}_{n}\bar{x}_{j}^{n} + \cdots + \bar{a}_{1}\bar{x}_{j} + \bar{a}_{0} = a_{n}\bar{x}_{j}^{n} + \cdots + a_{1}\bar{x}_{j} + a_{0}$,

which comes from restricting scalars along $\pi|_{k} : k \rightarrow \pi(k)$. Notice that our arguments do not need $\pi|_{k}$ to be injective! That’s why I believe that this can be generalized as much as we want (see below).

There is another reason that it is legitimate to write $a\bar{x}_{j} = \bar{a}\bar{x}_{j}$: if we view the quotient $k[x_{1}, \cdots, x_{n}]/I$ as a $k$-algebra, this is exactly how one defines the $k$-action on a quotient module: if $N$ is an $A$-submodule of $M$, we define $a\bar{m} := \overline{am}$. Both views are valid here!

Question. Can we generalize this to $A[x_{1}, \cdots, x_{n}]$, where $A$ is an arbitrary commutative ring with unity?

Answer (not professional). I think we can, but it is more reasonable to assume that $A$ is an integral domain, so that we can conclude that each $(x_{1} - a_{1}, \cdots, x_{n} - a_{n})$ is a prime ideal, via an isomorphism $A[x_{1}, \cdots, x_{n}]/(x_{1} - a_{1}, \cdots, x_{n} - a_{n}) \simeq A$ similar to the one above. However, I think the isomorphism itself holds over any commutative ring.

Consider an example: $\mathbb{Z}[x, y] \rightarrow \mathbb{Z}$ given by $f \mapsto f(m, n)$. Then we can construct

$\phi : \mathbb{Z}[x, y]/(x - m, y - n) \rightarrow \mathbb{Z}$

by $\bar{f} \mapsto f(m, n)$ and

$\psi : \mathbb{Z} \rightarrow \mathbb{Z}[x, y]/(x - m, y - n)$

by $s \mapsto \bar{s}$. Then we have

$\psi\phi(\overline{f(x, y)}) = \psi(f(m, n)) = \overline{f(m, n)} = \overline{f(x, y)}$ and

$\phi\psi(s) = \phi(\bar{s}) = s$. This clearly generalizes to the case where $A$ is any commutative ring and we have any finite number of indeterminates. Again, the key is to see $\overline{f(x, y)} = \overline{f(m, n)}$. As we saw before, this holds simply because polynomials are “constructed by addition and multiplication”. Just as in the Theorem, this argument does not merely give an isomorphism; it also shows that

$f(a_{1}, \cdots, a_{n}) = 0$ if and only if $f(x_{1}, \cdots, x_{n}) \in (x_{1} - a_{1}, \cdots, x_{n} - a_{n})$.

To put it differently, we have

$f(a_{1}, \cdots, a_{n}) = 0$ if and only if $f(x_{1}, \cdots, x_{n}) = 0 \mod (x_{1} - a_{1}, \cdots, x_{n} - a_{n})$.

A more ambitious question. If I have not made any incorrect remark, it is natural to ask for the right notion of the ideals $(x_{1} - a_{1}, \cdots, x_{n} - a_{n})$ arising as kernels of $f \mapsto f(a_{1}, \cdots, a_{n})$. If the ground ring is not an integral domain, we necessarily have $(x_{1} - a_{1}, \cdots, x_{n} - a_{n}) \notin \mathbb{A}_{A}^{n}$, so we might want to make $\mathbb{A}_{A}^{n}$ bigger! Let us call such an ideal a type C ideal. Now we may ask what the type C ideals are in an arbitrary ring and whether they enrich the theory. However, this might be just a very shallow question.

Remark. Notice that we have not used the Nullstellensatz (neither weak nor strong) in this discussion. The weak Nullstellensatz is only needed to say that, over an algebraically closed field, the classical points give all the maximal ideals of a polynomial ring over the field.

# Review: Topologification 1

This post is to review some topological concepts. The material is from Topology (3rd edition) by James Munkres and several articles of proofwiki (e.g., the article about a way to create a basis), with some additional thoughts of my own.

Given a set $X$, we are interested in ways to construct a topological structure on $X$.

One natural thought is to pick some collection of subsets $\mathscr{S} \subseteq \mathcal{P}(X)$ and ask if there is a smallest topology on $X$ that contains $\mathscr{S}$. If it exists, it must be unique; let us denote by $O_{\mathscr{S}}$ this topology, calling it the topology generated by $\mathscr{S}$.

Existence of the topology generated by a set. We can construct this topology as the intersection of all topologies containing $\mathscr{S}$; at least one topology is being intersected, namely the entire power set $\mathcal{P}(X)$.

Remark. Let $\mathscr{T} \subseteq \mathcal{P}(X)$ be a topology on $X$. Then $O_{\mathscr{T}} = \mathscr{T}$. Moreover, the assignment $\mathscr{S} \mapsto O_{\mathscr{S}}$ is monotone in $\mathscr{S}$, so it gives a functor from the poset category $\mathcal{P}(\mathcal{P}(X))$ (ordered by inclusion) to the poset category of topologies on $X$.

Remark. Given a collection of subsets $\mathscr{S} \subseteq \mathcal{P}(X)$, the topology $O_{\mathscr{S}}$ is the best topological approximation of $\mathscr{S}$. It deserves a name such as “topologification”, and I am pretty sure it is left adjoint to the forgetful functor from the poset of topologies on $X$ to the poset $\mathcal{P}(\mathcal{P}(X))$.

We now want another (less categorical but more set-theoretic) criterion to compute $O_{\mathscr{S}}$. An easier case is given by the concept of a “basis”.

A subcollection $\mathscr{B} \subseteq \mathcal{P}(X)$ is called a (topological) basis if it satisfies the following axioms:

1. $\mathscr{B}$ covers $X$.
2. If $x \in B_{1} \cap B_{2}$ where $B_{1}, B_{2} \in \mathscr{B}$, then there is $B_{3} \in \mathscr{B}$ such that $x \in B_{3} \subseteq B_{1} \cap B_{2}$.

We can compute $O_{\mathscr{B}}$ as follows.

Theorem 1.1. Let $\mathscr{B}$ be a topological basis in $\mathcal{P}(X)$. Then

$O_{\mathscr{B}} = \{U \in \mathcal{P}(X) | (\forall x \in U) (\exists B \in \mathscr{B}) (x \in B \subseteq U)\}$.

Remark. Let $O$ denote the right-hand side. It is easy to show that $O$ is a topology on $X$. However, it is hard (at least for me) to show the equality!

We show that $O$ is a topology on $X$. (i) We get $\emptyset \in O$ vacuously and $X \in O$ trivially. Let $\{U_{i}\}_{i \in I} \subseteq O$. (ii) If $x \in \cup_{i \in I}U_{i}$, then $x \in B \subseteq U_{j} \subseteq \cup_{i\in I}U_{i}$ for some $j \in I$ and some $B \in \mathscr{B}$, so $\cup_{i \in I}U_{i} \in O$. (iii) If $x \in U_{i} \cap U_{j}$, then there are $B_{i}, B_{j} \in \mathscr{B}$ such that $x \in B_{i} \subseteq U_{i}$ and $x \in B_{j} \subseteq U_{j}$, so take $B_{k} \in \mathscr{B}$ such that $x \in B_{k} \subseteq B_{i} \cap B_{j} \subseteq U_{i} \cap U_{j}$. Thus $U_{i} \cap U_{j} \in O$.

We have established that $O$ forms a topology on $X$, and clearly $\mathscr{B} \subseteq O$, so $O_{\mathscr{B}} \subseteq O$. The difficulty arises when one tries to show $O \subseteq O_{\mathscr{B}}$. To finish the proof of the theorem, we need another way to compute $O_{\mathscr{B}}$, via another description of $O$.

Theorem 1.2. Let $\mathscr{B}$ be a topological basis in $\mathcal{P}(X)$. Then

$O_{\mathscr{B}} = \{U \in \mathcal{P}(X) | U \text{ is a union of some members of }\mathscr{B}\}$.

Proof. Denote by $O'$ the right-hand side. It is evident that $O' \subseteq O_{\mathscr{B}}$, since $O_{\mathscr{B}}$ contains $\mathscr{B}$ and is closed under unions. If we show that $O' = O$ (see the remark above), then we can conclude at once that $O_{\mathscr{B}} = O' = O$, since $O$ is then a topology on $X$ containing $\mathscr{B}$!

Let $U \in O$. For each $x \in U$, (using the axiom of choice) choose $B_{x} \in \mathscr{B}$ such that $x \in B_{x} \subseteq U$. Then $U = \cup_{x \in U}B_{x} \in O'$, so $O \subseteq O'$. The reverse inclusion $O' \subseteq O$ is immediate, so we finish the proof (of Theorems 1.1 and 1.2 together). Q.E.D.
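On a small finite set, Theorem 1.2 can be verified by brute force: take a basis, form all possible unions, and check the topology axioms directly. The basis below (it covers $X$, and the only pairwise intersection is empty) is an arbitrary choice:

```python
from itertools import chain, combinations

X = frozenset({1, 2, 3})
basis = [frozenset({1}), frozenset({2, 3})]   # covers X; intersection is empty

def all_unions(sets):
    # All unions of subfamilies of `sets` (the empty union gives the empty set).
    out = set()
    for sub in chain.from_iterable(combinations(sets, r)
                                   for r in range(len(sets) + 1)):
        out.add(frozenset().union(*sub))
    return out

O = all_unions(basis)   # by Theorem 1.2, the topology generated by the basis

assert frozenset() in O and X in O          # contains the empty set and X
for U in O:
    for V in O:
        assert U & V in O                   # closed under pairwise intersections
        assert U | V in O                   # closed under unions
assert all(B in O for B in basis)           # contains the basis
```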

Remark. Do not forget that the description given in Theorem 1.1 was not so trivial to prove! As a corollary, applying $O_{O(X)} = O(X)$, we get a nontrivial way to describe the open sets of a topological space $(X, O(X))$.

Corollary 1.3. Let $X$ be a topological space. Then $U \subseteq X$ is open if and only if for each $x \in U$ there is an open $V \ni x$ such that $x \in V \subseteq U$.

# Subspace and quotient space

A subspace is a space equipped with an inclusion morphism. A quotient space is a space equipped with a projection morphism (mapping each element to its equivalence class).