Via Bioinformatics Zen:
"A big computer, a complex algorithm and a long time does not equal science." -- Robert Gentleman
Monday, July 21, 2008
Thursday, July 10, 2008
Parsing problem solved thanks to R-Help mailing list
Recently I needed to parse several HUGE text files (~6M lines, ~600 MB each) that were not formatted in a standard way, so they could not be imported easily via scan, read.table, etc.
Because of the size of these files I had to avoid loops and find a way to vectorize the problem.
After several hours spent trying to solve it without success, I decided to send a help request to the R-help list. In no time I got the answer to this (at least for me) very tricky exercise :-)
You can read the full story here.
I REALLY love the R-Help mailing list! Thanks Guys!
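The file format and the actual solution from the thread are not reproduced here, but the vectorized idea looks roughly like this (a sketch with a made-up "key: value" line format and a hypothetical file name):
lines <- readLines("huge_file.txt")      # hypothetical file name
keep <- grepl(":", lines, fixed = TRUE)  # keep only well-formed lines
keys <- sub(":.*$", "", lines[keep])     # text before the first ':'
vals <- sub("^[^:]*:", "", lines[keep])  # text after the first ':'
dati <- data.frame(key = keys, value = vals)
No explicit loop is needed: grepl and sub each process all the lines in a single vectorized call.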
Sunday, June 1, 2008
See you in Bressanone in 2 weeks!
Computational and Statistical Aspects of Microarray Analysis (CSAMA08)
Bressanone-Brixen, June 15th-20th, 2008
Tuesday, May 6, 2008
Replacing tabs with spaces and back in Vim
Not specifically R-related.
If you need to replace all the blank spaces in a file with tabs for easy importing into R, you can do it in Vim in no time. In normal mode type:
:set noexpandtab
:%retab!
The full story here.
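To go the other way (tabs back to spaces), the same mechanism works in reverse; this pair is standard Vim, though it was not shown in the original post:
:set expandtab
:%retab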
Wednesday, April 16, 2008
Installing R on Unix/Linux - no root access
Thanks and credit to Joern Toedling for this useful and clear how-to!
From The Bioconductor Digest, Vol 62, Issue 14:
You do not need to have root access to a machine to install your own
versions of R and your favourite packages there. This is how to do it:
1. download the tar.gz of the development version of R from
ftp://ftp.stat.math.ethz.ch/Software/R/
2. uncompress it to a directory you have write access to, say ~/local/R
3. change into the uncompressed directory, ~/local/R/R-devel
4. run "./configure"
5. run "make"
Afterwards you can start R by executing ~/local/R/R-devel/bin/R;
to simplify that, either add the bin directory to your PATH or create an alias for R.
You do not need to run "make install" to work with R.
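Put together, the build steps look like this (a sketch assuming a Bash shell; the exact tarball name on the FTP server is a guess and may differ):
mkdir -p ~/local/R && cd ~/local/R
wget ftp://ftp.stat.math.ethz.ch/Software/R/R-devel.tar.gz   # file name may differ
tar xzf R-devel.tar.gz
cd R-devel
./configure
make
alias R=~/local/R/R-devel/bin/R   # or add the bin directory to your PATH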
For packages,
1. create a directory in which you want the packages installed, e.g. ~/local/rpacks
2. create an environment variable R_LIBS that holds the path to that directory, e.g. "setenv R_LIBS ${HOME}/local/rpacks" in a C shell (in a Bash shell use "export R_LIBS=${HOME}/local/rpacks")
This environment variable tells R where to look first for installed packages and where to install packages when using "install.packages" or "biocLite".
R_LIBS is safe to use, since it only extends the path to look for packages and does not replace the default R library path.
I would recommend adding the alias for starting R and the R_LIBS definition to your shell startup file (~/.cshrc or ~/.bashrc).
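Once R_LIBS is set, you can verify the setup from inside R (a minimal check, not part of the original message):
.libPaths()                 # ~/local/rpacks should be listed first
install.packages("e1071")   # now installs into ~/local/rpacks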
Wednesday, February 27, 2008
Classification: a quick and dirty example
## Thanks to the UCI repository: MAGIC Gamma Telescope data set
magic04 <- read.table("http://archive.ics.uci.edu/ml/machine-learning-databases/magic/magic04.data", header = FALSE, sep = ",")
# split the data set into training and test sets
split.data <- function(data, p = 0.7, s = 666){
set.seed(s)
index <- sample(1:dim(data)[1])
n.train <- floor(dim(data)[1] * p)   # one cut point for both sets, so no row is dropped
train <- data[index[1:n.train], ]
test <- data[index[(n.train + 1):dim(data)[1]], ]
return(list(train = train, test = test))
}
dati = split.data(magic04, p = 0.7)
train = dati$train
test = dati$test
# SVM training just for fun
library(e1071)
model <- svm(train[, 1:10], as.factor(train[, 11]), probability = TRUE)  # as.factor ensures classification rather than regression
# prediction on the test set
pred <- predict(model, test[, 1:(dim(test)[[2]] - 1)], probability = TRUE)
# Check the predictions
table(pred,test[,dim(test)[2]])
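# overall accuracy from the confusion table (an addition, not in the original post)
conf <- table(pred, test[, dim(test)[2]])
sum(diag(conf)) / sum(conf)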
pred.prob <- attr(pred, "probabilities")
# take the column of the positive class; check colnames(pred.prob) to be sure it is the one you want
pred.to.roc <- pred.prob[, 1]
# performance assessment
library(ROCR)
pred.rocr <- prediction(pred.to.roc, as.factor(test[,(dim(test)[[2]])]))
perf.rocr <- performance(pred.rocr, measure = "auc", x.measure = "cutoff")
cat("AUC =",deparse(as.numeric(perf.rocr@y.values)),"\n")
perf.tpr.rocr <- performance(pred.rocr, "tpr","fpr")
plot(perf.tpr.rocr, colorize = TRUE)
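A dashed diagonal makes the chance baseline visible (a small addition to the original plot):
abline(a = 0, b = 1, lty = 2)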
Thursday, January 10, 2008
Hello World for Clustering methods
A "hello world" program can be a useful sanity test to make sure that the procedure/method you are analyzing "works" at least for very basic tasks. For this purpose, I created an artificial data set from four different 2-dimensional normal distributions to check how well the four clusters can be recognized by common clustering methods.
set1 <- cbind(rnorm(100, 0, 2), rnorm(100, 0, 2))
set2 <- cbind(rnorm(100, 0, 2), rnorm(100, 8, 2))
set3 <- cbind(rnorm(100, 8, 2), rnorm(100, 0, 2))
set4 <- cbind(rnorm(100, 8, 2), rnorm(100, 8, 2))
dati <- list(values = rbind(set1, set2, set3, set4), classes = rep(1:4, each = 100))
# clustering - common methods
op <- par(mfcol = c(2, 2))
par(las =1)
plot(dati$values, col = as.integer(dati$classes), xlim=c(-6,14), ylim = c(-6,14), xlab="", ylab="", main = "True Groups")
party <- kmeans(dati$values,4)
plot(dati$values, col = party$cluster, xlab = "", ylab = "", main = "kmeans")
hc <- hclust(dist(dati$values), method = "ward.D")  # "ward" was renamed "ward.D" in later versions of R
memb <- cutree(hc, k = 4)
plot(dati$values, col = memb, xlab = "", ylab = "", main = "hclust Euclidean ward")
hc <- hclust(dist(dati$values), method = "complete")
memb <- cutree(hc, k = 4)
plot(dati$values, col = memb, xlab = "", ylab = "", main = "hclust Euclidean complete")
par(op)
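As a quick sanity check (an addition to the original code), you can cross-tabulate each method's assignments against the true groups; up to a relabeling of the clusters, a good result concentrates each row in a single column:
table(kmeans = party$cluster, truth = dati$classes)
table(hclust.complete = memb, truth = dati$classes)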