ipyrad-analysis toolkit: PCA and other dimensionality reduction
Principal component analysis (PCA) is a dimensionality reduction method that transforms and projects data points onto fewer orthogonal axes that explain the greatest amount of variance in the data. While many tools implement PCA, the ipyrad.pca tool includes options designed specifically for dealing with missing data, to which PCA analyses are very sensitive. It makes it easy to perform PCA on RAD-seq data by filtering and/or imputing missing data, and allows easy subsampling of the individuals included in analyses.
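As a concept sketch, the projection step can be shown with scikit-learn's PCA on a random toy genotype matrix (the matrix, sample count, and SNP count here are invented for illustration; ipa.pca performs the equivalent decomposition after its own filtering and imputation):

```python
# Illustrative only: project a toy genotype matrix (rows = samples,
# columns = SNPs coded 0/1/2) onto its first two principal components.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(42)
genos = rng.integers(0, 3, size=(8, 100))   # 8 samples x 100 SNPs

model = PCA(n_components=2)
coords = model.fit_transform(genos)

print(coords.shape)                         # (8, 2): sample positions on 2 PC axes
print(model.explained_variance_ratio_)      # fraction of variance per axis
```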
Required software
[1]:
# conda install ipyrad -c bioconda
# conda install scikit-learn -c bioconda
# conda install toyplot -c eaton-lab
[2]:
import ipyrad.analysis as ipa
import pandas as pd
import toyplot
Required input data files
Your input data should be a .snps.hdf5 database file produced by ipyrad. If you do not have one you can generate it from any VCF file following the vcf2hdf5 tool tutorial. The database file contains the genotype calls as well as linkage information that is used for subsampling unlinked SNPs and for bootstrap resampling.
[3]:
# the path to your .snps.hdf5 database file
data = "/home/deren/Downloads/ref_pop2.snps.hdf5"
#data = "/home/deren/Downloads/denovo-min50.snps.hdf5"
Input data file and population assignments
Population assignments (an imap dictionary) are optional, but can be used in several ways by the pca tool. First, you can filter your data to require a minimum coverage in each population. Second, you can use the frequency of genotypes within populations to impute missing data for other samples. Finally, population assignments can be used to color points when plotting your results. You can assign individual samples to populations using an imap dictionary like the one below. We also create a minmap dictionary stating that we require 50% coverage in each population.
[4]:
# group individuals into populations
imap = {
    "virg": ["TXWV2", "LALC2", "SCCU3", "FLSF33", "FLBA140"],
    "mini": ["FLSF47", "FLMO62", "FLSA185", "FLCK216"],
    "gemi": ["FLCK18", "FLSF54", "FLWO6", "FLAB109"],
    "bran": ["BJSL25", "BJSB3", "BJVL19"],
    "fusi": ["MXED8", "MXGT4", "TXGR3", "TXMD3"],
    "sagr": ["CUVN10", "CUCA4", "CUSV6"],
    "oleo": ["CRL0030", "HNDA09", "BZBB1", "MXSA3017"],
}
# require that 50% of samples have data in each group
minmap = {i: 0.5 for i in imap}
[5]:
# ipa.snps_extracter(data).names
Enter data file and params
The pca analysis object takes input data as the .snps.hdf5 file produced by ipyrad. All other parameters are optional. The imap dictionary groups individuals into populations, and minmap can be used to filter SNPs to include only those with data for at least some proportion of samples in every group. The mincov option works similarly: it filters SNPs that are shared across less than some proportion of all samples (in contrast to minmap, it does not use the imap groupings).
When you init the object it will load the data and apply filtering. The printed output tells you how many SNPs were removed by each filter and how much missing data remains after filtering. These remaining missing values are the ones that will be filled by imputation. The options for imputing data are listed further down in this tutorial. Here we use the “sample” method, which I generally recommend.
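The mincov and minmap filters can be sketched in plain numpy. This is a simplified stand-in for what ipa.pca does internally, not its actual implementation; the -9 missing code and the filter_snps helper are invented for illustration:

```python
# Hypothetical sketch of mincov/minmap-style SNP filtering.
# Missing genotypes are coded as -9 in this toy example.
import numpy as np

def filter_snps(genos, sample_pops, mincov=0.75, minmap=None):
    """Return a boolean mask over SNP columns that pass coverage filters."""
    present = genos != -9
    # mincov: fraction of ALL samples with data at each SNP
    keep = present.mean(axis=0) >= mincov
    # minmap: fraction of samples WITHIN each population with data
    if minmap:
        pops = np.asarray(sample_pops)
        for pop, minprop in minmap.items():
            keep &= present[pops == pop].mean(axis=0) >= minprop
    return keep

# toy data: 4 samples in 2 populations, 3 SNPs
genos = np.array([
    [0,  2, -9],
    [1, -9, -9],
    [2,  2, -9],
    [0, -9,  1],
])
mask = filter_snps(genos, ["A", "A", "B", "B"],
                   mincov=0.5, minmap={"A": 0.5, "B": 0.5})
print(mask)  # SNPs 0 and 1 pass; SNP 2 is dropped by mincov
```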
[6]:
# init pca object with input data and (optional) parameter options
pca = ipa.pca(
    data=data,
    imap=imap,
    minmap=minmap,
    mincov=0.75,
    impute_method="sample",
)
Samples: 27
Sites before filtering: 349914
Filtered (indels): 0
Filtered (bi-allel): 13001
Filtered (mincov): 110150
Filtered (minmap): 112898
Filtered (combined): 138697
Sites after filtering: 211217
Sites containing missing values: 183722 (86.98%)
Missing values in SNP matrix: 501031 (8.79%)
Imputation: 'sampled'; (0, 1, 2) = 77.1%, 10.7%, 12.2%
Run PCA
Call .run() to generate the PC axes and the variance explained by each axis. The results are stored in your analysis object as dictionaries under the attributes .pcaxes and .variances. Feel free to take these data and plot them using any method you prefer. The code cell below shows how to save the data to a CSV file and how to view the PC data as a table.
[6]:
# run the PCA analysis
pca.run()
Subsampling SNPs: 28369/211217
[7]:
# store the PC axes as a dataframe
df = pd.DataFrame(pca.pcaxes[0], index=pca.names)
# write the PC axes to a CSV file
df.to_csv("pca_analysis.csv")
# show the first 10 samples and the first 10 PC axes
df.iloc[:10, :10].round(2)
[7]:
| | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|---|---|---|---|---|---|---|---|---|---|---|
| BJSB3 | 45.26 | 52.07 | -10.97 | -35.64 | 2.39 | -0.39 | 3.12 | 0.14 | 2.59 | 0.07 |
| BJSL25 | 43.05 | 48.69 | -9.43 | -30.46 | 1.44 | 0.72 | 2.19 | -0.17 | 1.07 | 0.63 |
| BJVL19 | 43.01 | 48.83 | -10.85 | -31.74 | 2.47 | 0.47 | 2.57 | -0.45 | 2.36 | -0.62 |
| BZBB1 | 39.00 | -48.77 | 4.83 | 0.08 | 10.37 | -22.45 | 2.96 | -2.56 | 3.47 | -0.63 |
| CRL0030 | 39.69 | -49.19 | 3.03 | -0.23 | 8.56 | -13.14 | 0.13 | 1.22 | 2.31 | 0.07 |
| CUCA4 | 12.60 | -34.85 | -3.61 | -4.60 | -21.33 | 40.63 | -0.54 | 15.26 | 0.97 | -4.93 |
| CUSV6 | 8.41 | -33.68 | -3.85 | -5.30 | -21.44 | 42.09 | 3.06 | -18.19 | 0.33 | 8.01 |
| CUVN10 | 13.45 | -35.30 | -1.01 | -3.65 | -14.59 | 27.80 | 1.94 | 1.44 | -3.39 | -7.37 |
| FLAB109 | -31.42 | -0.51 | -20.02 | -3.05 | -25.75 | -16.48 | -2.72 | -3.17 | 1.67 | -1.13 |
| FLBA140 | -30.57 | 2.87 | 25.80 | -8.74 | 3.68 | 0.65 | -0.59 | 3.00 | -0.39 | 0.15 |
Run PCA and plot results
When you call .run(), a PCA model is fit to the data and two results are generated: (1) the sample weightings on the component axes; and (2) the proportion of variance explained by each axis. For convenience we have developed a plotting function, .draw(), to plot these results (generated with toyplot, https://toyplot.rtfd.io). The first two arguments to this function are the two axes to be plotted. By default the plotting function uses the imap information to color points and create a legend.
[8]:
# plot PC axes 0 and 2
pca.draw(0, 2);
Subsampling SNPs
By default run() will randomly subsample one SNP per RAD locus to reduce the effect of linkage on your results. This can be turned off by setting subsample=False, as in the example below. When using subsampling you can set the random seed to make your results repeatable. The default run above subsampled 28K SNPs from a possible 211K SNPs, but the final results with no subsampling are quite similar.
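The "one SNP per locus" idea can be sketched in plain numpy. The locus ID array and the subsample_unlinked helper below are invented stand-ins for the linkage information stored in the HDF5 file; this is not ipyrad's actual implementation:

```python
# Hypothetical sketch: given one locus ID per SNP column, select a
# single random SNP from each locus so the retained SNPs are unlinked.
import numpy as np

def subsample_unlinked(locus_ids, seed=None):
    """Return sorted column indices keeping one random SNP per locus."""
    rng = np.random.default_rng(seed)
    locus_ids = np.asarray(locus_ids)
    chosen = [rng.choice(np.flatnonzero(locus_ids == loc))
              for loc in np.unique(locus_ids)]
    return np.sort(chosen)

# 6 SNPs spread across 3 loci -> 3 SNPs retained, one per locus
idx = subsample_unlinked([0, 0, 1, 1, 1, 2], seed=12345)
print(len(idx))  # 3
```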
[9]:
# plot PC axes 0 and 2 with no subsampling
pca.run(subsample=False)
pca.draw(0, 2);
Subsampling with replication
Subsampling unlinked SNPs is generally a good idea for PCA analyses since you want to remove the effects of linkage from your data. It also provides a convenient way to explore the confidence in your results. Using the nreplicates option you can run many replicate analyses, each subsampling a different random set of unlinked SNPs. The replicate results are drawn with lower opacity and the centroid of the points for each sample is plotted as a black point. You can hover over the points with your cursor to see the sample names pop up.
[10]:
# plot PC axes 0 and 2 with many replicate subsamples
pca.run(nreplicates=25, seed=12345)
pca.draw(0, 2);
Subsampling SNPs: 28369/211217
Advanced: Imputation algorithms
We offer three algorithms for imputing missing data:
sample: Randomly sample genotypes based on the frequency of alleles within (user-defined) populations (imap).
kmeans: Randomly sample genotypes based on the frequency of alleles in (kmeans cluster-generated) populations.
None: All missing values are imputed with zeros (ancestral allele).
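The "sample" strategy can be sketched in plain numpy: each missing call is filled by drawing from the genotypes observed within that sample's population. The -9 missing code and the impute_sample helper are invented for illustration and simplify what ipa.pca actually does:

```python
# Hypothetical sketch of "sample" imputation: fill each missing
# genotype (-9) by sampling from genotypes observed in the same
# population at that SNP.
import numpy as np

def impute_sample(genos, sample_pops, seed=None):
    """Return a copy of genos with -9 entries filled within populations."""
    rng = np.random.default_rng(seed)
    out = genos.copy()
    pops = np.asarray(sample_pops)
    for pop in np.unique(pops):
        rows = np.flatnonzero(pops == pop)
        for j in range(out.shape[1]):
            col = out[rows, j]
            observed = col[col != -9]
            miss = rows[col == -9]
            if observed.size and miss.size:
                out[miss, j] = rng.choice(observed, size=miss.size)
    return out

genos = np.array([
    [0,  2],
    [-9, 2],
    [2, -9],
    [2,  1],
])
filled = impute_sample(genos, ["A", "A", "B", "B"], seed=1)
print((filled == -9).sum())  # 0: no missing values remain
```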
No imputation (None)
The None option will almost always be a bad choice when there is any reasonable amount of missing data. Missing values will all be filled as zeros (ancestral allele) – this is what many other PCA tools do as well. I show it here for comparison to the imputed results, which are better. The two points near the top of the plot are samples with the most missing data that are erroneously grouped together. The rest of the samples also form much less clear clusters than in the other examples where we use imputation or stricter filtering options.
[11]:
# init pca object with input data and (optional) parameter options
pca1 = ipa.pca(
    data=data,
    imap=imap,
    minmap=minmap,
    mincov=0.25,
    impute_method=None,
)
# run and draw results for impute_method=None
pca1.run(nreplicates=25, seed=123)
pca1.draw(0, 2);
Samples: 27
Sites before filtering: 349914
Filtered (indels): 0
Filtered (bi-allel): 13001
Filtered (mincov): 9517
Filtered (minmap): 112898
Filtered (combined): 121048
Sites after filtering: 228866
Sites containing missing values: 201371 (87.99%)
Missing values in SNP matrix: 640419 (10.36%)
Imputation (null; sets to 0): 100.0%, 0.0%, 0.0%
Subsampling SNPs: 29695/228866
No imputation but stricter filtering (mincov)
Here I do not allow for any missing data (mincov=1.0). You can see that this reduces the total number of SNPs from 349K to 27K. The final result is not too different from our first example, but seems a little less clean. In most data sets it is probably better to include more data by imputing some values, though, and many data sets will not have as many SNPs without missing data as this one.
[12]:
# init pca object with input data and (optional) parameter options
pca2 = ipa.pca(
    data=data,
    imap=imap,
    minmap=minmap,
    mincov=1.0,
    impute_method=None,
)
# run and draw results for impute_method=None and mincov=1.0
pca2.run(nreplicates=25, seed=123)
pca2.draw(0, 2);
Samples: 27
Sites before filtering: 349914
Filtered (indels): 0
Filtered (bi-allel): 13001
Filtered (mincov): 321628
Filtered (minmap): 112898
Filtered (combined): 322419
Sites after filtering: 27495
Sites containing missing values: 0 (0.00%)
Missing values in SNP matrix: 0 (0.00%)
Subsampling SNPs: 6675/27495
Kmeans imputation (integer)
The kmeans clustering method imputes values based on population allele frequencies (like the sample method) but without requiring a priori assignment of individuals to populations. In other words, it is meant to reduce the bias introduced by assigning individuals yourself. Instead, this method uses kmeans clustering to group individuals into “populations” and then imputes values based on those groupings. This is accomplished through iterative clustering, starting with only the SNPs that are present across 90% of all samples (this can be changed with the topcov param) and allowing more missing data in each iteration until the mincov parameter value is reached.
This method works especially well if you have a lot of missing data and fear that user-defined population assignments will bias your results. Here it gives very similar results to our first plots using the “sample” impute method, suggesting that our population assignments are not greatly biasing the results. To use K=7 clusters you simply enter impute_method=7.
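The core step (inferring groupings without user labels, then imputing within them) can be sketched with scikit-learn's KMeans. This single-pass version is an invented simplification of the iterative routine described above; the kmeans_groups helper and the -9 missing code are illustrative only, and missing values are zero-filled only for the clustering itself:

```python
# Hypothetical sketch: infer sample groupings by kmeans clustering on a
# zero-filled copy of the genotype matrix. The inferred labels could
# then feed a within-group imputation like the "sample" method.
import numpy as np
from sklearn.cluster import KMeans

def kmeans_groups(genos, k, seed=0):
    """Cluster samples into k groups, treating -9 (missing) as 0."""
    filled = np.where(genos == -9, 0, genos)
    km = KMeans(n_clusters=k, n_init=10, random_state=seed)
    return km.fit_predict(filled)

# two clearly distinct groups of samples -> kmeans separates them
genos = np.array([
    [0, 0, 0, -9],
    [0, 0, 0,  0],
    [2, 2, 2,  2],
    [2, 2, -9, 2],
])
labels = kmeans_groups(genos, k=2)
print(labels)
```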
[13]:
# kmeans imputation
pca3 = ipa.pca(
    data=data,
    imap=imap,
    minmap=minmap,
    mincov=0.5,
    impute_method=7,
)
# run and draw results for kmeans clustering into 7 groups
pca3.run(nreplicates=25, seed=123)
pca3.draw(0, 2);
Kmeans clustering: iter=0, K=7, mincov=0.9, minmap={'global': 0.5}
Samples: 27
Sites before filtering: 349914
Filtered (indels): 0
Filtered (bi-allel): 13001
Filtered (mincov): 222081
Filtered (minmap): 29740
Filtered (combined): 225958
Sites after filtering: 123956
Sites containing missing values: 96461 (77.82%)
Missing values in SNP matrix: 142937 (4.27%)
Imputation: 'sampled'; (0, 1, 2) = 76.7%, 15.0%, 8.3%
{0: ['FLCK216', 'FLSA185'], 1: ['BJSB3', 'BJSL25', 'BJVL19'], 2: ['FLAB109', 'FLCK18', 'FLMO62', 'FLSF47', 'FLSF54', 'FLWO6'], 3: ['BZBB1', 'CRL0030', 'CUVN10', 'HNDA09', 'MXSA3017'], 4: ['MXED8', 'MXGT4', 'TXGR3', 'TXMD3'], 5: ['FLBA140', 'FLSF33', 'LALC2', 'SCCU3', 'TXWV2'], 6: ['CUCA4', 'CUSV6']}
Kmeans clustering: iter=1, K=7, mincov=0.8, minmap={0: 0.5, 1: 0.5, 2: 0.5, 3: 0.5, 4: 0.5, 5: 0.5, 6: 0.5}
Samples: 27
Sites before filtering: 349914
Filtered (indels): 0
Filtered (bi-allel): 13001
Filtered (mincov): 131220
Filtered (minmap): 111129
Filtered (combined): 150798
Sites after filtering: 199116
Sites containing missing values: 171621 (86.19%)
Missing values in SNP matrix: 427659 (7.95%)
Imputation: 'sampled'; (0, 1, 2) = 77.6%, 10.0%, 12.5%
{0: ['FLAB109', 'FLCK18', 'FLMO62', 'FLSF47', 'FLSF54', 'FLWO6'], 1: ['BZBB1', 'CRL0030', 'CUVN10', 'HNDA09', 'MXSA3017'], 2: ['FLBA140', 'FLSF33', 'LALC2', 'SCCU3', 'TXWV2'], 3: ['MXED8', 'MXGT4', 'TXGR3', 'TXMD3'], 4: ['BJSB3', 'BJSL25', 'BJVL19'], 5: ['FLCK216', 'FLSA185'], 6: ['CUCA4', 'CUSV6']}
Kmeans clustering: iter=2, K=7, mincov=0.7, minmap={0: 0.5, 1: 0.5, 2: 0.5, 3: 0.5, 4: 0.5, 5: 0.5, 6: 0.5}
Samples: 27
Sites before filtering: 349914
Filtered (indels): 0
Filtered (bi-allel): 13001
Filtered (mincov): 76675
Filtered (minmap): 111129
Filtered (combined): 124159
Sites after filtering: 225755
Sites containing missing values: 198260 (87.82%)
Missing values in SNP matrix: 606805 (9.96%)
Imputation: 'sampled'; (0, 1, 2) = 77.4%, 10.1%, 12.5%
{0: ['FLCK216', 'FLSA185'], 1: ['BJSB3', 'BJSL25', 'BJVL19'], 2: ['FLAB109', 'FLCK18', 'FLMO62', 'FLSF47', 'FLSF54', 'FLWO6'], 3: ['BZBB1', 'CRL0030', 'CUVN10', 'HNDA09', 'MXSA3017'], 4: ['MXED8', 'MXGT4', 'TXGR3', 'TXMD3'], 5: ['FLBA140', 'FLSF33', 'LALC2', 'SCCU3', 'TXWV2'], 6: ['CUCA4', 'CUSV6']}
Kmeans clustering: iter=3, K=7, mincov=0.6, minmap={0: 0.5, 1: 0.5, 2: 0.5, 3: 0.5, 4: 0.5, 5: 0.5, 6: 0.5}
Samples: 27
Sites before filtering: 349914
Filtered (indels): 0
Filtered (bi-allel): 13001
Filtered (mincov): 52105
Filtered (minmap): 111129
Filtered (combined): 119932
Sites after filtering: 229982
Sites containing missing values: 202487 (88.04%)
Missing values in SNP matrix: 646076 (10.40%)
Imputation: 'sampled'; (0, 1, 2) = 77.3%, 10.1%, 12.5%
{0: ['FLBA140', 'FLSF33', 'LALC2', 'SCCU3', 'TXWV2'], 1: ['BJSB3', 'BJSL25', 'BJVL19'], 2: ['BZBB1', 'CRL0030', 'HNDA09', 'MXSA3017'], 3: ['FLCK216', 'FLSA185'], 4: ['FLAB109', 'FLCK18', 'FLMO62', 'FLSF47', 'FLSF54', 'FLWO6'], 5: ['MXED8', 'MXGT4', 'TXGR3', 'TXMD3'], 6: ['CUCA4', 'CUSV6', 'CUVN10']}
Kmeans clustering: iter=4, K=7, mincov=0.5, minmap={0: 0.5, 1: 0.5, 2: 0.5, 3: 0.5, 4: 0.5, 5: 0.5, 6: 0.5}
Samples: 27
Sites before filtering: 349914
Filtered (indels): 0
Filtered (bi-allel): 13001
Filtered (mincov): 29740
Filtered (minmap): 115494
Filtered (combined): 123595
Sites after filtering: 226319
Sites containing missing values: 198824 (87.85%)
Missing values in SNP matrix: 627039 (10.26%)
Imputation: 'sampled'; (0, 1, 2) = 77.2%, 10.4%, 12.4%
Subsampling SNPs: 29415/226319
Save plot to PDF
You can save the figure as a PDF or SVG automatically by passing an outfile argument to the .draw() function.
[14]:
# The outfile must end in either `.pdf` or `.svg`
pca.draw(outfile="mypca.pdf")
Advanced: Missing data per sample
You can view the proportion of missing data per sample by accessing the .missing data table of your pca analysis object. You can see that most samples in this data set had 10% missing data or less, but a few had 20-50% missing data. You can hover your cursor over the plot above to see the sample names. It seems pretty clear that samples with large amounts of missing data do not stand out as outliers in these plots, as they did in the no-imputation plot. Which is great!
[15]:
# .missing is a pandas DataFrame
pca3.missing.sort_values(by="missing")
[15]:
| | missing |
|---|---|
| BJSL25 | 0.03 |
| BJVL19 | 0.03 |
| FLBA140 | 0.03 |
| CRL0030 | 0.04 |
| LALC2 | 0.04 |
| FLSF54 | 0.04 |
| CUVN10 | 0.06 |
| FLAB109 | 0.06 |
| MXGT4 | 0.07 |
| MXED8 | 0.08 |
| CUSV6 | 0.08 |
| HNDA09 | 0.08 |
| FLSF33 | 0.08 |
| BJSB3 | 0.09 |
| FLSF47 | 0.09 |
| MXSA3017 | 0.09 |
| FLMO62 | 0.10 |
| TXMD3 | 0.10 |
| FLWO6 | 0.11 |
| FLCK18 | 0.11 |
| TXGR3 | 0.11 |
| BZBB1 | 0.11 |
| FLCK216 | 0.11 |
| FLSA185 | 0.13 |
| CUCA4 | 0.14 |
| SCCU3 | 0.23 |
| TXWV2 | 0.55 |
Advanced: TSNE and other dimensionality reduction methods
While PCA plots are very informative, it is sometimes difficult to visualize just how well separated your samples are, since the results span many dimensions. A popular tool to further examine the separation of samples is t-distributed stochastic neighbor embedding (TSNE). We’ve implemented this in the pca tool as well: it first decomposes the data with PCA and then runs TSNE on the PC axes. The results vary depending on the parameters and random seed, so you cannot plot replicate runs with this method, and it is important to explore parameter values to find settings that work well.
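The PCA-then-TSNE pipeline can be sketched directly with scikit-learn; the toy matrix, component counts, and perplexity value here are illustrative, not ipyrad's internal defaults:

```python
# Illustrative PCA -> TSNE pipeline on a random toy genotype matrix:
# reduce to a handful of PC axes first, then embed those in 2-D.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

rng = np.random.default_rng(123)
genos = rng.integers(0, 3, size=(12, 200)).astype(float)

pcs = PCA(n_components=10).fit_transform(genos)
emb = TSNE(n_components=2, perplexity=4.0, random_state=123).fit_transform(pcs)
print(emb.shape)  # (12, 2): one 2-D point per sample
```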
[16]:
pca.run_tsne(subsample=True, perplexity=4.0, n_iter=100000, seed=123)
Subsampling SNPs: 28369/211217
[17]:
pca.draw();
Advanced: UMAP dimensionality reduction
From the UMAP docs: “low values of n_neighbors will force UMAP to concentrate on very local structure (potentially to the detriment of the big picture), while large values will push UMAP to look at larger neighborhoods of each point when estimating the manifold structure of the data”
The min_dist parameter controls how tightly UMAP is allowed to pack points together; it quite literally sets the minimum distance apart that points are allowed to be in the low-dimensional representation.
[33]:
pca.run_umap(subsample=False, n_neighbors=12, min_dist=0.1)
[34]:
pca.draw();