All Categories

  • Fix Google Crawl Errors 404

    1/1/2016 -
    Seeing 404s in the Crawl Errors report is normal: a page may no longer exist, another website may link to a page that no longer exists, or you may have changed your domain or the folder structure of your URLs. You could leave every entry in the Crawl Errors list as a 404, but depending on the page, those URLs might still drive traffic to the site, so I redirect them to the parent page instead. For example, my blog posts normally start from /blog/Post/{then the page}, but I found lots of 404s like /Home/Blog/{page url} or /mobile/blog/post/{page url}, so I redirect those to the parent page. The following uses Node.js: app.get('/home/blogs/*',; app.get('/mobile/blog/post/:post', routes.views.
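    The redirect idea above can be sketched as a small helper that maps a legacy URL to its canonical target. This is a minimal sketch, assuming an Express app; the legacy patterns mirror the URLs in the post, but the /blog and /blog/post/... targets are assumptions about the blog's canonical structure, not the post's exact code (which is cut off in the excerpt).

    ```javascript
    // Map a legacy request path to its canonical redirect target,
    // or null if the path is not a known legacy URL (let it 404).
    function legacyRedirectTarget(path) {
      // Old mobile post URLs -> canonical post URL.
      const mobilePost = path.match(/^\/mobile\/blog\/post\/([^/]+)$/i);
      if (mobilePost) {
        return '/blog/post/' + mobilePost[1];
      }
      // Anything under the old /home/blogs/ prefix -> blog index (the parent page).
      if (/^\/home\/blogs(\/|$)/i.test(path)) {
        return '/blog';
      }
      return null;
    }

    // Wiring it into Express would look roughly like:
    //   app.get('/mobile/blog/post/:post', (req, res) =>
    //     res.redirect(301, legacyRedirectTarget(req.path)));
    //   app.get('/home/blogs/*', (req, res) =>
    //     res.redirect(301, legacyRedirectTarget(req.path)));
    ```

    A permanent (301) redirect also tells Google to drop the old URL from the Crawl Errors report over time.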
  • Troubleshoot IIS - Service Not Available 503 Issue

    1/1/2016 -
  • Code Snippet - Use T-SQL to Search Table Name by Column Name

    1/1/2016 -
    A T-SQL example to find a table name when you only know the column name: SELECT t.name AS 'TableName', c.name AS 'ColumnName' FROM sys.columns c JOIN sys.tables t ON c.object_id = t.object_id WHERE c.name LIKE '%YourColumnName%' (replace YourColumnName with the column you are searching for).
  • Use IIS Url Rewrite Module to Redirect from Http to Https

    1/1/2016 -
    How to use the IIS URL Rewrite module to redirect from HTTP to HTTPS.
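    The excerpt stops before the rule itself; for reference, a typical http-to-https rule for the IIS URL Rewrite module looks like the following in web.config (a common pattern, not necessarily the exact rule from the post):

    ```xml
    <system.webServer>
      <rewrite>
        <rules>
          <rule name="Redirect to HTTPS" stopProcessing="true">
            <match url="(.*)" />
            <conditions>
              <add input="{HTTPS}" pattern="off" ignoreCase="true" />
            </conditions>
            <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" redirectType="Permanent" />
          </rule>
        </rules>
      </rewrite>
    </system.webServer>
    ```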
  • C# RSS Reader Publish Date Issue

    1/1/2016 -
    Issue: using C#'s SyndicationFeed to read an RSS feed sometimes throws an exception. Why? Reading an RSS feed should be simple: the RSS format is standardized, so everyone should be able to follow the same format and build libraries for it. You can use the W3C feed validator tool to check whether your own RSS feed is valid. Now, here is how to read an RSS page in C#. C# provides SyndicationFeed for working with RSS. However, I've seen the C# code throw an exception depending on the value of an RSS feed item's publish date; the following is an example that throws an exception when reading the RSS. Let's say you have the following kind of RSS feed to read. You can get the RSS from this url http://www.asp.
  • HTML i Tag Closing Tag

    1/1/2016 -
    I thought I understood most of HTML, but look at the following. When you write this: <div> <h1> hi </h1> <i class="myclass" /> </div> the browser outputs the code below: <i> is not a void element, so the self-closing slash is ignored and the tag needs an explicit closing tag </i>. Because <i> is a formatting element, the parser even reopens it after the div and wraps the rest of the page in it: <body> <div> <h1> hi </h1> <i class="myclass"> </i></div><i class="myclass"> ... </i></body> How about this HTML..
  • iOS Development Information

  • SEO - Robots and Sitemap.xml

    1/8/2016 -
    robots.txt: The easiest robots improvement is to add a robots.txt file with the following content. It allows any crawler to crawl your site but tells it not to index your /keystone and /admin root folders. One reason to do this is that the engines then won't check those pages, so crawling your site is faster.
    User-agent: *
    Disallow: /keystone
    Disallow: /admin
    For more information about robots, see "Create robots.txt file" or the robots database. sitemap.xml: sitemap.xml is another important way to help Google crawl your website. I'm using the following to auto-generate the sitemap.
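    The post's generator code is cut off in this excerpt; as a rough illustration of the idea, a sitemap can be built from a list of post slugs like this. This is a minimal sketch: the base URL, the /blog/post/ path, and the buildSitemap name are placeholders, not the post's actual generator.

    ```javascript
    // Build a minimal sitemap.xml string from a site URL and post slugs.
    function buildSitemap(baseUrl, slugs) {
      const urls = slugs
        .map(slug =>
          '  <url>\n' +
          `    <loc>${baseUrl}/blog/post/${slug}</loc>\n` +
          '  </url>'
        )
        .join('\n');
      return '<?xml version="1.0" encoding="UTF-8"?>\n' +
        '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n' +
        urls + '\n' +
        '</urlset>';
    }
    ```

    In a Keystone/Express blog like this one, you would regenerate this string from the published posts and serve it at /sitemap.xml.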
  • Stop Using Visual Studio 2013 with Node.js Project

    1/1/2016 -
    Stop using Visual Studio with Node.js.