#Codeigniter #aws-sdk A few days of working with CodeIgniter reaffirms that many developers working in CodeIgniter use frameworks to speed development and ignore well-factored code. One thing of note is getting the sparks aws-sdk working to put an object into S3. It took me a while to work out, but the essence is:
$s3Obj = array(
    'acl'        => AmazonS3::ACL_PUBLIC,
    'fileUpload' => $imgPath,
);
$s3->create_object(
    $amazon_settings->aws_bucket_name,
    $amazon_settings->aws_bucket_path . $abs_imgpath,
    $s3Obj
);
// $imgPath was: /var/www/codeigniter/media/template/base_template/header.jpg
// aws_bucket_name: the name of the bucket
// aws_bucket_path . $abs_imgpath: the full path from the bucket root to the file,
//   e.g. 'release/media/template/base_template/header.jpg'
That wasn't actually documented anywhere in the sparks library.
#AWS #PHP #BASH
It's not even 10:20, and this morning I have edited a legacy PHP system to include an extra column of data which apparently has never appeared on an export and has only just been discovered. I've recovered a lost xls file for a client because they are sloppy clickers and had dragged it to another share. I have configured a jailed user account with their own secure key and access to a folder on the filesystem so they can upload files directly. I have checked in on a STEM event I am helping deliver this evening, and I have not even had my second cup of coffee.
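For the curious, a jailed upload account like that can be sketched with OpenSSH's built-in chroot support. A minimal sshd_config fragment, where the user name and jail path are assumptions, not the real ones:

```
# /etc/ssh/sshd_config sketch: SFTP-only jailed account (hypothetical names)
Match User clientupload
    ChrootDirectory /srv/jails/clientupload   # must be root-owned, not user-writable
    ForceCommand internal-sftp                # no shell, just file transfer
    AllowTcpForwarding no
    X11Forwarding no
```

The user's key goes in the usual authorized_keys file, and the writable upload folder sits inside the chroot.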
#AWS #Wordpress There are many schools of thought on automation of tasks, but the reality is that when I have to do a new WordPress install on a dev server it takes me all of 10 minutes, and the act of creating Apache configs and WordPress installations by hand means I keep an eye on what is still running or utilised and ask the question: do we need it? In over two decades of automation so many tools have come and gone, and few have replaced the mark one eyeball and attention to the installation directories.
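The Apache side of one of those 10-minute installs is a single vhost file. A sketch, where the server name and paths are placeholders:

```apache
# Hypothetical dev vhost for a manual WordPress install
<VirtualHost *:80>
    ServerName dev.example.com
    DocumentRoot /var/www/dev.example.com/wordpress
    <Directory /var/www/dev.example.com/wordpress>
        AllowOverride All     # WordPress pretty permalinks rely on .htaccess
        Require all granted
    </Directory>
    ErrorLog ${APACHE_LOG_DIR}/dev.example.com-error.log
    CustomLog ${APACHE_LOG_DIR}/dev.example.com-access.log combined
</VirtualHost>
```

Enable the site, reload Apache, unpack WordPress into the DocumentRoot, and the famous five-minute install does the rest.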
#AWS #Cloudwatch #Bash For those interested in the relevant AWS command line content, the link is included. One thing I discovered is that you need to ensure that the source machine running the instance requests runs ntpdate so its internal clock is consistently up to date; I had too many problems with the timestamps otherwise. I have another script on the remote EC2 web instance which picks up the logs, looks over an hour of content, and returns the lines and bytes requested, all written to CSV for now.
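The hourly lines-and-bytes summary can be sketched in a few lines of awk, assuming an Apache combined-format log where the response size is field 10. The sample log and file paths here are stand-ins, not the real ones:

```shell
#!/bin/sh
# Sketch: count lines and sum the response-size field of an Apache
# combined-format access log (field 10; '-' counts as zero), then append
# one "timestamp,lines,bytes" row to a CSV.
# A two-line sample log stands in for the real access log here.
LOG=/tmp/sample_access.log
OUT=/tmp/log_metrics.csv
cat > "$LOG" <<'EOF'
1.2.3.4 - - [01/Feb/2015:10:00:01 +0000] "GET / HTTP/1.1" 200 1000
5.6.7.8 - - [01/Feb/2015:10:00:02 +0000] "GET /x HTTP/1.1" 304 -
EOF
awk -v ts="$(date -u +%Y-%m-%dT%H:%M:%SZ)" '
    { lines++; if ($10 != "-") bytes += $10 }
    END { printf "%s,%d,%d\n", ts, lines, bytes }
' "$LOG" >> "$OUT"
cat "$OUT"
```

Run from cron once an hour against the last hour's slice of the log, it builds up the CSV one row at a time.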
#AWS #BASH And it is working … now to leave it a few hours and see if it's correct. The collection of basic CloudWatch metrics for a number of instances, plus a script which gathers the count of lines in the log file and the sum of bytes requested for that instance. I am hoping that the load balancer requests will be far fewer than the Apache log lines, which then tells me the value of my Varnish caches. I may also grab some Varnish cache stats.
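The CloudWatch side of an hourly pull looks roughly like this with the AWS CLI's `get-metric-statistics`. The instance ID is hypothetical, and this sketch only echoes the command it would run rather than calling AWS:

```shell
#!/bin/sh
# Sketch: build an hourly CloudWatch metrics request for one instance.
# It echoes the command instead of running it; drop the `echo` once
# credentials are configured. The instance ID is made up.
INSTANCE_ID="i-0123456789abcdef0"
END=$(date -u +%Y-%m-%dT%H:00:00Z)
START=$(date -u -d '1 hour ago' +%Y-%m-%dT%H:00:00Z 2>/dev/null \
        || date -u -v-1H +%Y-%m-%dT%H:00:00Z)   # GNU date, BSD fallback
echo aws cloudwatch get-metric-statistics \
    --namespace AWS/EC2 \
    --metric-name CPUUtilization \
    --dimensions Name=InstanceId,Value="$INSTANCE_ID" \
    --start-time "$START" --end-time "$END" \
    --period 3600 --statistics Average > /tmp/cw_cmd.txt
cat /tmp/cw_cmd.txt
```

Swap the metric name for NetworkIn, NetworkOut, or the ELB RequestCount metric to cover the comparison against the Apache log numbers.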
#AWS #bash Nothing like a good distraction, and today it was getting the AWS command line working so that I could get the metrics for our instances and report back every hour. Sure, there are other services that offer this, but I don't want to add more external pressure to an internal routine; this way reduces the number of open routes into the backend. More to post on this later once I have it wrapped and operational.
#Varnish #AWS Tonight's question: how to ensure that URLs which did not start with www, or requests arriving as plain http on port 80, were redirected to the respective https://www address in Varnish, thereby saving a trip to Apache where they would waste network and processor time. It turns out this is not too hard to deliver, providing your AWS load balancer is pointing to port 80 on the Varnish instance; details of the edits are in the PasteBin.
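The shape of the trick, sketched in Varnish 4 VCL: the ELB terminates SSL and stamps X-Forwarded-Proto on the request, so Varnish can spot non-www hosts or plain-http requests and answer a synthetic 301 itself. The host name here is an assumption, and this is a fragment, not the PasteBin content:

```vcl
# Sketch (Varnish 4 VCL): redirect bare-domain or plain-http requests
# to https://www.example.com without touching Apache.
sub vcl_recv {
    if (req.http.host != "www.example.com" ||
        req.http.X-Forwarded-Proto != "https") {
        return (synth(750, "Moved Permanently"));
    }
}

sub vcl_synth {
    if (resp.status == 750) {
        set resp.status = 301;
        set resp.http.Location = "https://www.example.com" + req.url;
        return (deliver);
    }
}
```

The redirect is served straight from Varnish, so the backend never sees the misaddressed request.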
#AWS The load balancers on Amazon Web Services require you to pass the public certificate (the file you got back from the supplier/registrar), the private key (the file you created with openssl), and optionally (hint: not really optional) the certificate chain. It turns out that the registrar may provide you their certificate chain, and pasting that into the final box means you have an SSL endpoint at the load balancer which handles the SSL between your clients and your Apache instance. You can now configure the listeners to listen on port 443 on the outside and pass them to port 80 (or whatever you need) on the inside. Okay, the communication between load balancer and AWS instance is unencrypted on its journey, but you've passed the network cost upstream of your app. It works well enough to allow me to run Varnish behind the load balancers, and they in turn load a couple of instances.
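The same 443-outside, 80-inside listener can be set up from the classic ELB CLI rather than the console. A sketch that only echoes the command it would run; the load balancer name and certificate ARN are hypothetical:

```shell
#!/bin/sh
# Sketch: echo (rather than run) the classic-ELB listener call.
# The load balancer name and IAM certificate ARN are made up.
echo aws elb create-load-balancer-listeners \
    --load-balancer-name my-web-lb \
    --listeners "Protocol=HTTPS,LoadBalancerPort=443,InstanceProtocol=HTTP,InstancePort=80,SSLCertificateId=arn:aws:iam::123456789012:server-certificate/my-cert" \
    > /tmp/elb_cmd.txt
cat /tmp/elb_cmd.txt
```

The SSLCertificateId points at the certificate you uploaded (public cert, private key, and that chain), and InstanceProtocol=HTTP is what hands the unencrypted traffic to Varnish on port 80.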
#AWS I do quite a bit of work with Amazon Web Services, and this morning has mostly involved reviewing whether the current deployment of services from AWS in China is suitable for us to deliver content currently hosted in Singapore. The problem to be resolved is that we really need a way for DNS to hand out zone information based on the origin of the client's request. Anything in China needs to be pointed to China.
ChinaNetCloud proves bloody helpful here to a degree, but the reality is that I think the next few months will see me possibly building out a DNS server with BIND 9.10; it has the features I think we will need. What I wonder is how to robustly handle the queries, so that's on my reading list for February.
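The BIND 9.10 feature in question is GeoIP-aware ACLs combined with views. A rough named.conf sketch, assuming BIND is built with GeoIP support; the zone name and file paths are placeholders:

```
# Sketch: answer Chinese clients from one zone file, everyone else from another
acl china { geoip country CN; };

view "china" {
    match-clients { china; };
    zone "example.com" {
        type master;
        file "zones/example.com.china";   # records pointing at the China deployment
    };
};

view "default" {
    match-clients { any; };
    zone "example.com" {
        type master;
        file "zones/example.com.global";  # records pointing at Singapore
    };
};
```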
I am back home for some time. I realized the internet connection here does not allow connecting to any ports other than 80 and 443, so basically that meant no connection to my AWS instance. Seems like the network admins aren't that smart; I just added port 443 to the SSH daemon and I am back at work again. Apart from that, I am reading a few research papers these days to see the trends in mobile computing. Had a look at Raft for leader election. #raft #mobile #aws #homesweethome
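That workaround is a two-line change, assuming nothing else on the instance (an HTTPS service, say) is already bound to 443:

```
# /etc/ssh/sshd_config: listen on the default port and on 443 as well
Port 22
Port 443
```

Restart sshd and connect with `ssh -p 443 user@host` from behind the restrictive network.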