A typical machine-learning data structure is an "Occurrence Matrix", or "Co-Occurrence Matrix". It's especially useful in natural language processing, where the occurrences are word counts in documents. Each cell in the occurrence matrix represents the occurrence of one event along with another (e.g., a word appearing in a document, passage, or database record).
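Here is a minimal sketch of building such a matrix in numpy. The three-document corpus and the word/document orientation (words as rows, documents as columns) are just illustrative assumptions:

```python
import numpy as np

# Three tiny hypothetical documents, made up for illustration.
docs = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "the cat chased the dog",
]

# Vocabulary: one row per distinct word, in a fixed sorted order.
vocab = sorted({word for doc in docs for word in doc.split()})

# Occurrence matrix C: rows are words, columns are documents;
# C[i, j] = number of times word i appears in document j.
C = np.array([[doc.split().count(word) for doc in docs] for word in vocab])

print(vocab)  # ['cat', 'chased', 'dog', 'log', 'mat', 'on', 'sat', 'the']
print(C)      # shape (8, 3): 8 words by 3 documents
```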
If you think about it, this set of connections between events is really a graph (a network of connections) at its most fundamental level. Occurrence matrices just take all the nodes of one type (say, words) and array them along the rows, and put the nodes of the other type (say, documents) along the columns. So in NLP you will often build a table of the number of times each word occurs in each document. But don't forget that each element/cell of the matrix, located at row i and column j, represents the value of a connection (edge) from vertex i to vertex j. If the nodes are the same sort of thing--say, just words and the number of times they occur together in the same document or sentence--then the matrix becomes square rather than rectangular.
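To make the square case concrete, one common way to get a word-by-word matrix from the rectangular word-by-document matrix is to multiply it by its own transpose. This is a sketch continuing the example above, and "co-occurrence" here means the count-weighted tally of appearing in the same document:

```python
# Reusing C from the sketch above (rows = words, columns = documents):
# multiplying C by its transpose collapses the document axis and leaves a
# square word-by-word matrix. Entry (i, j) is the sum over documents of
# count(word_i) * count(word_j) -- a count-weighted measure of how often
# the two words appear together in the same document.
word_word = C @ C.T   # shape: (n_words, n_words), symmetric
```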
So suppose you have an occurrence matrix and want to simplify your graph by eliminating the row nodes (e.g., words) while retaining only the column nodes (e.g., documents), so you can see how documents connect to each other through their use of the same words. In other words, you are looking at a graph diagram and want to eliminate one type of node:
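In graph terms, this is the one-mode projection of a bipartite graph, and with the matrix orientation used above it is just the complementary transpose product. A minimal numpy sketch, continuing the running example:

```python
# C.T @ C eliminates the word nodes and leaves a square
# document-by-document matrix. Entry (i, j) is large when documents i
# and j share many of the same words (weighted by their counts), so it
# can be read as an edge weight in a documents-only graph. The diagonal
# holds each document's self-similarity (sum of its squared word counts).
doc_doc = C.T @ C     # shape: (n_docs, n_docs), symmetric

print(doc_doc)
```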