Obviously, Varnish would be the usual answer here, but since our infrastructure architecture is already complex, I needed to do this without introducing yet another layer just yet.
So, looking over the options, I found out that nginx can cache responses directly in memcached. Of course, it always sounds easy when you read about it, but it turns out there's no clear tutorial on how to actually do it.
As such, I headed to the Internet looking for an answer and quickly found a post describing the same problem, with no answer: http://stackoverflow.com/questions/25639445/nginx-caching-response-from-remote-server-with-memcached/
Long story short, here's the solution I came up with; maybe someone else has a better one?
Add this line to nginx.conf (it adds support for Lua; see below for why):
lua_package_path '/usr/local/lib/lua/?.lua';
site config (in my case default):
upstream memcached {
    server 127.0.0.1:11211;
    keepalive 32;
}

server {
    listen 8080 default_server;

    root /usr/share/nginx/html;
    index index.html index.htm;

    # Make site accessible from http://localhost/
    server_name localhost;

    # Internal location used by srcache/memc to talk to memcached
    location = /memc {
        internal;

        memc_connect_timeout 100ms;
        memc_send_timeout 100ms;
        memc_read_timeout 100ms;
        memc_ignore_client_abort on;

        set $memc_key $arg_key;
        set $memc_exptime 300;   # cache entries for 300 seconds

        memc_pass memcached;
    }

    # Expose memcached "stats" output, handy for debugging cache hits
    location /memc-stats {
        add_header Content-Type text/plain;
        set $memc_cmd stats;
        memc_pass memcached;
    }

    location / {
        # Hash the request URI into a consistent memcached key
        set_by_lua $key 'return ngx.md5(ngx.arg[1])' $request_uri;

        srcache_fetch GET /memc key=$key;
        srcache_methods GET;
        srcache_store_statuses 200 301 302;

        error_page 403 404 502 504 = @fallback;
    }

    location @fallback {
        proxy_pass http://127.0.0.1:80$request_uri;

        set_by_lua $key 'return ngx.md5(ngx.arg[1])' $request_uri;

        srcache_request_cache_control off;
        srcache_store PUT /memc key=$key;
    }
}
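To sanity-check that responses actually land in memcached, you can compute the same key the Lua snippet produces (an MD5 hex digest of the request URI) and query memcached directly. Here's a minimal sketch in Python (names are mine, not part of the config), assuming memcached is listening on 127.0.0.1:11211 as configured above:

```python
import hashlib
import socket


def memc_key(request_uri: str) -> str:
    """Same key the Lua snippet computes: MD5 hex digest of the URI."""
    return hashlib.md5(request_uri.encode("utf-8")).hexdigest()


def memc_get(request_uri: str, host: str = "127.0.0.1", port: int = 11211) -> bytes:
    """Fetch the cached entry for a URI via the memcached text protocol."""
    with socket.create_connection((host, port), timeout=1) as s:
        s.sendall(b"get " + memc_key(request_uri).encode("ascii") + b"\r\n")
        data = b""
        # memcached terminates a "get" response with "END\r\n"
        while not data.endswith(b"END\r\n"):
            chunk = s.recv(4096)
            if not chunk:
                break
            data += chunk
        return data


# Example (requires nginx to have cached the page first):
# print(memc_get("/"))
```

If the entry exists you'll see a `VALUE <key> ...` line followed by the stored response body; if you only get `END`, the page wasn't cached.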
My setup is Ubuntu 14.04, with nginx running on port 8080 and Apache on 80 (just to test this). nginx 1.7.5 is compiled with the following arguments in "debian/rules", under "full_configure_flags":
full_configure_flags := \
    $(common_configure_flags) \
    --with-http_addition_module \
    --with-http_dav_module \
    --with-http_geoip_module \
    --with-http_gzip_static_module \
    --with-http_image_filter_module \
    --with-http_secure_link_module \
    --with-http_spdy_module \
    --with-http_sub_module \
    --with-http_xslt_module \
    --with-mail \
    --with-mail_ssl_module \
    --with-http_ssl_module \
    --with-http_stub_status_module \
    --add-module=/opt/nginx/modules/ngx_devel_kit-0.2.19 \
    --add-module=/opt/nginx/modules/set-misc-nginx-module-0.26 \
    --add-module=/opt/nginx/modules/memc-nginx-module-0.15 \
    --add-module=/opt/nginx/modules/srcache-nginx-module-0.28 \
    --add-module=$(MODULESDIR)/headers-more-nginx-module \
    --add-module=$(MODULESDIR)/nginx-auth-pam \
    --add-module=$(MODULESDIR)/nginx-cache-purge \
    --add-module=$(MODULESDIR)/nginx-dav-ext-module \
    --add-module=$(MODULESDIR)/nginx-echo \
    --add-module=$(MODULESDIR)/nginx-http-push \
    --add-module=$(MODULESDIR)/nginx-lua \
    --add-module=$(MODULESDIR)/nginx-upload-progress \
    --add-module=$(MODULESDIR)/nginx-upstream-fair \
    --add-module=$(MODULESDIR)/ngx_http_substitutions_filter_module
As you can see, I've compiled in Lua and a few other modules. I needed Lua because I wanted a consistent way to hash the memcached keys, without having to worry about what happens if someone sends unexpected values, and to be able to compute the same hash from the backend.
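Computing the same hash from the backend is useful for cache invalidation: when the application updates a page, it can evict the stale copy by deriving the same key as `ngx.md5($request_uri)`. A hypothetical sketch, assuming a Python backend and memcached on 127.0.0.1:11211 (function names are mine):

```python
import hashlib
import socket


def cache_key(request_uri: str) -> str:
    """Mirror nginx's set_by_lua hashing: MD5 hex digest of the request URI."""
    return hashlib.md5(request_uri.encode("utf-8")).hexdigest()


def purge_cached(request_uri: str, host: str = "127.0.0.1", port: int = 11211) -> bytes:
    """Evict the cached copy of a URI, e.g. after the backend changes its content."""
    with socket.create_connection((host, port), timeout=1) as s:
        s.sendall(b"delete " + cache_key(request_uri).encode("ascii") + b"\r\n")
        # memcached replies b"DELETED\r\n" if the key existed, b"NOT_FOUND\r\n" otherwise
        return s.recv(4096)
```

The next GET for that URI misses the cache, falls through to @fallback, and srcache stores the fresh response again.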
EDIT:
You can get the modules that I've added from here:
- Nginx Development Kit https://github.com/simpl/ngx_devel_kit
- ngx_set_misc https://github.com/openresty/set-misc-nginx-module
- ngx_memc https://github.com/openresty/memc-nginx-module
- ngx_srcache https://github.com/openresty/srcache-nginx-module