Posts

How to restore a PostgreSQL database dump

There are numerous ways to restore a PostgreSQL dump. For a typical plaintext dump, "psql" does a pretty neat job. In a few words, a PostgreSQL plaintext dump is nothing but a file of SQL statements which, when executed, recreate tables and reinsert the data. So, assuming you have a backup dump saved somewhere on your system, open your terminal and type:

psql -U {{username}} < {{dump.sql}}

That's it, you've imported a database dump.
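If you want the dump restored into a particular database rather than the role's default one, psql's -d flag selects the target. A minimal sketch, assuming a hypothetical role "alice" and an existing, empty database "books":

psql -U alice -d books < books.sql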

How to dump a PostgreSQL database from the terminal

PostgreSQL is a relational database management system, and arguably the best database management system around. PostgreSQL ships with utilities for manipulating databases from the terminal, "psql" being the most popular; I'm sure you've read about it somewhere or used it. Dumping (backing up) databases is done using PostgreSQL's pg_dump utility. In your terminal type:

pg_dump <database name> > <output>

For instance, to back up a database named "books" to a file named "books.sql", do:

pg_dump "books" > books.sql

How to exclude database data from backups

To exclude database data from your output file, include the "-s" (schema-only) flag in your command:

pg_dump -s "books" > books.sql
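As a side note (a sketch of mine, not from the original post): if the database belongs to a specific role, pg_dump accepts the same -U flag as psql; the role name "alice" below is a hypothetical placeholder. Conversely to "-s", the "-a" flag dumps data only, without the schema.

pg_dump -U alice "books" > books.sql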

Introduction to the Web Share API (navigator.share)

navigator.share is a JavaScript API providing a sharing interface similar to that of native apps. The Web Share API is supported on most up-to-date mobile browsers, including Google Chrome for Android. Sharing is done by passing an object with the parameters "url", "text" or "title" to the navigator.share function; for a share request to succeed you must supply at least one of these parameters. navigator.share returns a Promise upon being called:

navigator.share({
    "url": document.location,
    "text": "I'm an example",
    "title": document.title
})
.then(resp => console.info("Successfully shared"))
.catch(errors => console.error(errors))
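Since browser support is uneven, it's common to feature-detect before calling the API. A minimal sketch (the fallback message is just an illustration):

if (navigator.share) {
    navigator.share({ "title": document.title, "url": document.location.href })
        .then(() => console.info("Successfully shared"))
        .catch(errors => console.error(errors));
} else {
    // No Web Share API: fall back to your own share UI here
    console.info("Web Share API not supported in this browser");
}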

How to stop Moz DotBot from accessing your website

DotBot is Moz's web crawler; it gathers web data for the Moz Link Index. DotBot obeys robots.txt rules before accessing your host machine, so the easiest way to stop it is by adding robots.txt rules that limit its activity.

To forbid a directory, let's say "login", add:

User-agent: dotbot
Disallow: /login/

Upon reading and parsing the directives above, Moz DotBot won't dare access your site's login subdirectory in its crawl routine.

To forbid access to an entire website, include the directives below:

User-agent: dotbot
Disallow: /

Alternatively, you can limit the crawl rate by adding the directives below (the time is probably in seconds):

User-agent: dotbot
Crawl-delay: 10

I've attached an nginx log entry, a trail left by DotBot along with its IP and Moz support e-mail address:

216.244.66.194 - - [19/Mar/2020:15:16:29 +0000] "GET /index.html HTTP/1.1" 200 13433 "-" "Mozilla/5.0 (compat
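As an aside, if you'd rather slow DotBot down than block it entirely, the directory rule and the crawl delay can be combined under a single group. A sketch of a complete robots.txt:

User-agent: dotbot
Disallow: /login/
Crawl-delay: 10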

How to set the Content-Disposition header on an nginx server to force content download

Nginx (pronounced "engine x") is an HTTP server, reverse proxy server and mail proxy server. Nginx is well known for its high performance, stability, efficiency, rich feature set, simple configuration and low resource consumption. Content-Disposition headers direct web browsers to save content instead of attempting to render it.

To set Content-Disposition, open your desired server configuration in "/etc/nginx/sites-available/" and add the code below within your server {} block:

location ~* (mp3|ogg|wav)$ {
    add_header Content-Disposition "attachment";
}

mp3, ogg and wav are example file extensions matched by the regular expression rules. Test the configuration by accessing mp3, ogg and wav files from your webserver.

Alternatively, you can force custom filenames as shown below: "$1" substitutes the filename sent to clients, its value being taken from the requested file's path.
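A minimal sketch of that custom-filename variant, assuming the requested filename is captured with a regex group (the capture pattern below is my assumption, not necessarily the original configuration):

location ~* /([^/]+\.(mp3|ogg|wav))$ {
    add_header Content-Disposition 'attachment; filename="$1"';
}

With this rule, a request for /songs/track.mp3 captures "track.mp3" into $1, so the browser saves the file under that name.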

Reasons why you shouldn't consider hosting on GitHub Pages

GitHub Pages is a service for deploying static websites, normally software documentation and blogs. GitHub Pages is available on all public GitHub repositories, and on premium accounts it extends to private repositories as well. So what's not to like about it? Hmmm, think about it: that's too good to be true. It's no secret that "the only free cheese is served on a mouse trap".

1. "Downtime due to maintenance"

Every now and then GitHub Pages websites undergo an annoying and seemingly arbitrary maintenance that causes an undesirable user experience, especially for professional websites.

2. "Slow deployment"

I've been using GitHub Pages to host my static website with fewer than 10,000 pages; the deployment process knocked my entire site offline for 10 minutes. Again, that's unnecessary downtime, and sometimes it extended to days.

3. "Uninvited cloners"

So you put your site out there; a few days later: 50 clones, uninvited and unwelcome.

How to set Content-Disposition headers for Express Node.js apps

In Express Node.js apps you can force the user agent (browser) to download content instead of displaying or attempting to render it within the browser. In this example, assuming you're using express.static to serve content, pass an object of configuration parameters as the second argument to the express.static function:

const config = {
    setHeaders: res => res.set('Content-Disposition', 'attachment')
}

In your code, replace "path/to/media/" with the path pointing to your static content:

app.use('/media/', express.static('path/to/media/', config))

To test, try accessing content at /media/*
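Putting it together, a minimal runnable sketch; the port number and directory path are placeholders of mine, not from the post:

const express = require('express');

const app = express();

// Send Content-Disposition: attachment for everything under /media/
const config = {
    setHeaders: res => res.set('Content-Disposition', 'attachment')
};

app.use('/media/', express.static('path/to/media/', config));

app.listen(3000, () => console.log('Listening on port 3000'));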

HTML minifier module for Node.js

HTML minification is a process that strips unneeded, redundant or otherwise unused HTML attributes and elements without affecting the webpage's appearance. Minification is useful for statically generated "serverless" web application pages, as it delivers easy-to-digest, minimal webpages.

html-minifier is a free and open-source npm Node.js module, written in JavaScript, for minifying HTML documents. Compared to similar Node.js HTML minification modules, html-minifier proves to be pretty fast, flexible and efficient. The html-minifier module can be used to minify HTML code programmatically, or alternatively you can minify HTML code using the global command-line module.

Npm module installation

To install the module, in your terminal type:

npm i html-minifier

To install the command-line version of html-minifier, in your terminal type:

npm i -g html-minifier

usage: html-minifier [options] [files...]

options:
--remove-comments Strip HTML comments
--remove-empty-attributes Remove all attributes with whitespace-only values
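For programmatic use, the module exports a minify function. A small sketch (removeComments and collapseWhitespace are real html-minifier options; the input string is just an illustration):

const { minify } = require('html-minifier');

const result = minify('<p  title="intro" >  Hello,   world!  </p><!-- a comment -->', {
    removeComments: true,      // strip HTML comments
    collapseWhitespace: true   // collapse runs of whitespace
});

console.log(result); // <p title="intro">Hello, world!</p>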

Open Graph metadata tags introduction

Open Graph elements are meta tags that carry extra metadata about a webpage. Open Graph is used by Facebook to create website cards for URLs shared on its platform; cards contain a website image thumbnail as well as the title, description, type and URL. Open Graph meta tags have a property attribute containing a property name prefixed with "og:". For example:

<meta property="og:image" content="image-url">
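A fuller head section covering the card fields mentioned above might look like this (the URLs and text are placeholder values of mine):

<meta property="og:title" content="Example page title">
<meta property="og:description" content="A short description of the page">
<meta property="og:type" content="website">
<meta property="og:url" content="https://example.com/page">
<meta property="og:image" content="https://example.com/thumbnail.png">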

Solutions to blocked URLs on Facebook

Facebook allows sharing links on its platform, but in order to protect its users from accessing URLs containing malicious content, Facebook regularly scrapes content on shared URLs, effectively raising an alarm if the shared content goes against Facebook community standards, in short, if its content is forbidden on the Facebook platform. Forbidden content may include potentially malicious, crafted phishing websites or websites spreading malware, among others. Facebook may as well ban content flagged as spam, either reported or flagged by Facebook's moderating algorithm, the "Facebook sheriff". You wouldn't want to find your website spam-listed; you should avoid that.

There's no way to redeem a website flagged as a blocked website on Facebook, except through non-viable alternatives such as completely changing your domain or using another sub-domain. Facebook may as well ban content containing banned URLs in the meta Open Graph attributes. To resolve open-graph sha