Based on some new issues brought up by Morgan, I have simplified the regex for finding images (and I think removed a bug) and added “jpeg” to the list of valid image extensions.
Blogger Image Import has finally been updated after years of neglect. Sailorcurt convinced me to apply all of the user fixes to the plugin and get it working again. I have also applied a few changes of my own to support multiple configurable host domains and image formats.
In theory it could be used for any hosting domain but it has not been tested outside of Blogger.
I recently resurrected a C# project I was working on some time ago that I had almost forgotten. I was reminded, again, of a computer game I used to play in school called “Rocky’s Boots”. The last time I thought of it I decided to make a framework for building something like it in C#. This time I was reminded that I had already built it. Fortunately, I was able to find it.
The game was one where you connect components together to “kick” blocks of certain shapes and colours. I didn’t realize until I was much older that those components were basic logic gates and we were essentially programming our “kicker”.
To that end I built a tool in C# for connecting basic components together to form complicated circuits. It has the basic AND, OR, NOT and a few other gates, along with wires to connect their inputs and outputs. Circuits can be saved out to files and loaded back in as individual components, which can be duplicated and further connected to make more complex circuits. There is no kicking, nor is there any game to this – it is just the simulator – but it is fun to build circuits that do something.
Using those components I have built other things like NOR, XOR and latches, and gone so far as to build an 8-bit counter using two 4-bit counters, which are made of four 1-bit counters, which are made with latches for storage, which are made with SR flip-flops – all built out of AND, NAND, NOT and OR.
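That layering idea – richer gates composed purely from the primitives – is easy to show in code. This is just an illustrative C# sketch, not the actual API of the tool: the method names here are made up for the example.

```csharp
// Illustrative only – gates as plain functions rather than the
// tool's connectable components.
using System;

class Gates
{
    static bool And(bool a, bool b) => a && b;
    static bool Or(bool a, bool b) => a || b;
    static bool Not(bool a) => !a;

    // XOR built purely from AND, OR and NOT, the same way you
    // would wire it up in the simulator: (a OR b) AND NOT (a AND b)
    static bool Xor(bool a, bool b) => And(Or(a, b), Not(And(a, b)));

    static void Main()
    {
        foreach (var (a, b) in new[] { (false, false), (false, true),
                                       (true, false), (true, true) })
            Console.WriteLine($"{a} XOR {b} = {Xor(a, b)}");
        // Truth table: F,F -> F; F,T -> T; T,F -> T; T,T -> F
    }
}
```

In the application the same composition happens graphically: save the XOR circuit, import it as a user-defined gate, and it becomes a building block for the next layer up.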
There is no documentation yet so… here is the documentation:
- New items are added to the top left of the working area – move them by clicking and dragging
- Power items can be either ON or Strobing on and off.
- Bulbs just glow red when powered.
- The inputs are generally on the left side of each item, with the exception of things like the “Display” items that have a latch input in the top right corner (they will only update their display when this is set).
- You can save and load circuits that you have made
- You can create new user-defined “Gates” by saving a circuit with at least one input and one output and importing it under the “Gates->User Defined...” tool. Inputs are defined by Power sources and outputs are defined by Bulbs. Load some of the examples, such as components\NOT.xml, to see user-defined components.
Here is the application. Some sample files are included with it that you should be able to load.
Perhaps I will make this open source at some point. I see no reason not to give away the source, but it isn’t quite ready for prime time yet. I would like to make the core components pluggable – perhaps even written in an interpreted language – to allow some unique components without rewriting the application.
I fixed an issue reported here where the loading gif was not disappearing.
- Fixed a problem where spaces around the loading GIF prevented it from being hidden when the calendar loads
There were also a few changes in 1.3 since the last post.
- Removed duplicate events when showing multiple calendars that have been invited to the same event. If you create an event in calendar A and invite calendar B as a guest, then load them as “url” and “url2”, the event should only appear once.
- Added “Event Title Format” option to specify a format string to customize event titles (with or without the time).
- Added handling for errors that can occur when the widget is used offline (for test servers).
- Changed the layout of the widget settings to increase the size of the text boxes.
I have recently noticed that a lot of the relevant search queries that reference this site find things like the categories and archives instead of the specific posts that contain the relevant content.
It makes the search results look dirty and disorganized, and means there is duplicate content on different pages, which is what is confusing the search engines.
A bit of digging turned up a few ways to help direct the search engines to index the content I wish they would, instead of what they choose.
One quick way of preventing Google and other search engines from indexing parts of a site is by adding a robots.txt file in the root directory of the site. This file contains instructions for “well behaved” search engine crawlers.
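Here is a sketch of the kind of robots.txt being described – the exact Disallow paths and the sitemap URL are placeholders for this example, not necessarily what my file contains:

```
# Block WordPress internals and duplicate-content paths for all crawlers
User-agent: *
Disallow: /wp-admin/
Disallow: /wp-includes/
Disallow: /wp-content/plugins/
Disallow: /feed/
Disallow: /category/

# Give the AdSense crawler full access so ads can be matched to content
User-agent: Mediapartners-Google
Disallow:

Sitemap: http://example.com/sitemap.xml
```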
The first section is defined for all robots agents and blocks access to private WordPress directories as well as virtual paths that we don’t want indexed, such as the RSS feeds and categories.
The second section allows the Mediapartners-Google robot full access to the site. This is the robot used by AdSense, so that any page serving ads will get indexed for keyword context matching. Without this, AdSense would not be able to review the contents of the page to help match ads.
The last line “Sitemap:” identifies the sitemap built by the XML-Sitemap plugin.
The <meta> robots tag in the head of a page can be used to help robots determine, dynamically, what to do with a page. I use this, rather than robots.txt, for the archives since the format of the archive page names is somewhat dynamic (if I were to change it, I would have to update the robots.txt).
Instead of just blocking archives, I chose to block anything that is not a page, a post, or the homepage with the following code in my header.php of my theme.
Insert the following in the <head> tag in header.php.
If the blog page is NOT a single, or a page, or the homepage OR it is a paged file, then block it from being indexed and archived by search engines, but allow them to follow the links to other pages.
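That conditional can be sketched with the standard WordPress template tags – the exact markup in my theme may differ slightly, but the logic is the same:

```php
<?php if ((!is_single() && !is_page() && !is_home()) || is_paged()) : ?>
<meta name="robots" content="noindex,noarchive,follow" />
<?php endif; ?>
```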
I chose to block the is_paged() pages (things like /page/2, the older-post pages off the homepage) this way instead of through robots.txt so that they would still get “followed”. Anything excluded in robots.txt is, in theory, never loaded by a search engine robot, so the robot cannot follow any links on that page. I’m not sure this is strictly necessary, since those links should all be reachable by following the links through the posts.
This <meta> tag will also block category pages, so the /category exclusion in the robots.txt is not strictly necessary.
I’m not clear on exactly how the adsense robot treats the <meta> tag, but it seems like they might be blocked too. We will have to see how this plays out.
Another feature I discovered, and like, is to reverse the title of the pages. By default, my theme was making hierarchical names starting with “Notions” on the left and the post name on the right. I switched them so the article is on the left and the blog name is on the right. It makes the most significant thing, the page subject, the first thing you read.
So now this entry is titled “WordPress Blog Search Engine Optimization « Notions” instead of “Notions » WordPress Blog Search Engine Optimization”.
The following was inserted into the <head> tag in header.php and replaces any other reference to <title>.
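A sketch of that title line, assuming the standard wp_title and bloginfo template tags (my actual markup may differ):

```php
<title><?php wp_title('«', true, 'right'); ?> <?php bloginfo('name'); ?></title>
```

With the separator placed on the right, wp_title() prints nothing on the homepage, so the homepage title is just the blog name, while posts and pages get “Post Title « Notions”.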
To block categories or not to block categories?
The reason to block real pages such as the feeds and categories is to prevent duplicate content from being indexed. Category and archive pages contain copies of original posts, which they should, but it confuses search engines to see the same content in multiple places on your site. Blocking the extra copies makes it more obvious to the search engine where the real content is and what to index, and sends users to the real pages (with comments, etc.) rather than an archive page.
I debated for a while whether I should block the category pages, since they do provide a service for users searching for things related to those categories, and group related things together. In the end I decided that it was still not worth the confusion of the extra pages. An alternative would be to block regular posts and only allow the categories to be indexed, as they may provide more keywords to search and contain more relevant content.