---
title: "How-to xts"
output: rmarkdown::html_vignette
vignette: >
  %\VignetteIndexEntry{How-to xts}
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteEncoding{UTF-8}
---

```{r, include = FALSE}
knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>",
  fig.path = "VIGNETTE-"
)

```

The following example shows how to select and retrieve the files needed for processing. We start by specifying a target directory and target file names. Here, these are created in a directory named after the download date and time within a folder called "data".

```{r setup, eval=TRUE, echo=FALSE, warning=FALSE, message=FALSE}
library(FFdownload)
outd <- paste0(tempdir(),"/",format(Sys.time(), "%F_%H-%M"))
outfile <- paste0(outd,"FFData_xts.RData")
listfile <- paste0(outd,"FFList.txt")
```
```{r setup2, eval=FALSE, echo=TRUE}
library(FFdownload)
outd <- paste0("data/",format(Sys.time(), "%F_%H-%M"))
outfile <- paste0(outd,"FFData_xts.RData")
listfile <- paste0(outd,"FFList.txt")
```
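
The chunk above only constructs the path names. If the target folder does not already exist on your machine, it can be created before the first download (a minimal sketch using base R; adjust to your own directory layout):

```{r create_dir, eval=FALSE, echo=TRUE}
# create the target directory (including "data/") if it does not exist yet
dir.create(outd, recursive = TRUE, showWarnings = FALSE)
```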

Next, we download a list of all available files on [Kenneth French's website](https://mba.tuck.dartmouth.edu/pages/faculty/ken.french/data_library.html). We exclude all daily files to keep the list short.

```{r xts_list_save}
FFdownload(exclude_daily=TRUE, download=FALSE, download_only=TRUE, listsave=listfile)
read.delim(listfile, sep = ",")[c(1:4, 73:74), ]
```
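
If you are unsure about the exact file names, a simple keyword search over the saved list can help locate them (a minimal sketch using base R; the pattern "Momentum" is just an example):

```{r xts_search, eval=FALSE, echo=TRUE}
# search the saved file list for entries matching a keyword
grep("Momentum", readLines(listfile), value = TRUE)
```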

From this list we select the files to download. In our case we use the files containing the three Fama-French factors and the momentum factor:

*  "F-F_Research_Data_Factors_CSV"
*  "F-F_Momentum_Factor_CSV"

and download these files without processing them (for the sake of showing how the package works).

```{r xts_download}
inputlist <- c("F-F_Research_Data_Factors_CSV","F-F_Momentum_Factor_CSV")
FFdownload(exclude_daily=TRUE, tempd=outd, download=TRUE, download_only=TRUE, inputlist=inputlist)
list.files(outd)
```

Now we process these downloaded files and create a final "RData" file with a nested list structure from them. Because downloading and processing are separate stages, this step can be repeated at any time for any set of files that was downloaded and saved to a folder earlier.

```{r xts_processing}
FFdownload(exclude_daily=TRUE, tempd=outd, download=FALSE, download_only=FALSE, inputlist=inputlist, output_file = outfile)
```

Let us check the structure of the created list (after loading it into the current workspace).

```{r xts_load}
load(outfile)
ls.str(FFdata)
```
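
Individual series can be accessed directly from this list. A minimal sketch (the exact element names depend on the downloaded files, so it is best to check `names(FFdata)` first; the `$monthly$Temp2` slot holds the monthly data used in the next step):

```{r xts_inspect, eval=FALSE, echo=TRUE}
# element names follow the downloaded file names
names(FFdata)
# first rows of the monthly data of the first data set
head(FFdata[[1]]$monthly$Temp2)
```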

Now we process the data using code provided to me by [Joshua Ulrich (the developer of xts)](https://twitter.com/joshua_ulrich/status/1584950407335321601). Therein we merge all monthly `xts` objects, select data as of 1963, divide by $100$ because returns are given in percent, calculate cumulative returns and finally plot the resulting `xts` object.

```{r xts_process, eval=FALSE, echo=TRUE}
# merge the monthly series of all downloaded data sets into one xts object
monthly <- do.call(merge, lapply(FFdata, function(i) i$monthly$Temp2))
# drop incomplete rows and keep data from 1963 onwards
monthly_1960 <- na.omit(monthly)["1963/"]
# convert from percent and compound to cumulative returns
monthly_returns <- cumprod(1 + monthly_1960/100) - 1
plot(monthly_returns)
```
```{r xts_process2, eval=TRUE, echo=FALSE, out.width="100%", fig.width=8, fig.height=4}
monthly <- do.call(merge, lapply(FFdata, function(i) i$monthly$Temp2))
monthly_1960 <- na.omit(monthly)["1963/"]
monthly_returns <- cumprod(1 + monthly_1960/100) - 1
plot(monthly_returns, col = viridis::viridis(5, direction = -1), legend.loc="topleft", lwd=2, main="Fama-French & Carhart Factor Wealth Index")
```