## Bainite

It's not pearlite or martensite. A blog written by Mathew Peet.

## Density of Iron Carbide (Fe3C)

There are 4 formula units of Fe3C per unit cell, i.e. 12 Fe atoms and 4 C atoms. The cell is orthorhombic; typical lattice parameters at room temperature are given below, from Mehl et al., Trans. AIMME, 1933.

### Mass

m = mass of 1 mole of unit cells
  = 12 × A(Fe) + 4 × A(C)   (A = relative atomic mass)
  = 12 × 55.845 + 4 × 12.011 = 670.14 + 48.044
  = 718.184 g/mol

One mole of Fe3C unit cells therefore has a mass of 718.184 g.

### Volume

v = volume of 1 mole of unit cells
  = N_A × a × b × c   (N_A = Avogadro's number; a, b, c in units of 10^-10 m)
  = 6.0221415 × 10^23 × 4.518 × 5.069 × 6.736 × 10^-30 m^3
  = 9.2901 × 10^-5 m^3

### Density

d = density
  = m/v
  = 0.718184 kg / 9.2901 × 10^-5 m^3
  = 7730.6 kg/m^3
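As a sanity check, the arithmetic above can be reproduced in a few lines of Python (a sketch; the variable names are mine, the numbers are those used above):

```python
# Density of cementite (Fe3C) from the unit-cell data above.
AVOGADRO = 6.0221415e23       # unit cells per mole
A_FE, A_C = 55.845, 12.011    # relative atomic masses, g/mol
a, b, c = 4.518e-10, 5.069e-10, 6.736e-10  # lattice parameters, m

# 4 Fe3C formula units per cell -> 12 Fe and 4 C atoms per cell
mass = 12 * A_FE + 4 * A_C        # g per mole of unit cells
volume = AVOGADRO * a * b * c     # m^3 per mole of unit cells
density = (mass / 1000) / volume  # kg/m^3

print(f"mass = {mass:.3f} g/mol")
print(f"volume = {volume:.4e} m^3/mol")
print(f"density = {density:.1f} kg/m^3")
```

This reproduces the 718.184 g/mol and 9.2901 × 10^-5 m^3 figures, and a density of about 7731 kg/m^3.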

## Steel Poll 2

To clear up any confusion, I bring you the second poll. Please justify your selection in the comments section.

Second Steel Poll

Second Steel Pole

## Steel Poll

I created a poll so that we can decide once and for all which is the best phase of steel. You can see the poll below, next to the pole. Please feel free to justify your answer below.

Steel Poll

Steel Pole

## Note for Rupert Murdoch

According to the BBC, Rupert Murdoch will try to block Google from using "news" content from his companies. As you can read here, he can already do this easily by requesting that Google remove his websites from their news index using the Google News opt-out form. Better still, he could create a robots.txt file on each of his web servers to prevent indexing of his sites by any web crawler which respects the "Robots Exclusion Protocol".
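For example, a minimal robots.txt that asks every compliant crawler to stay away from an entire site looks like this (a sketch; the actual rules would depend on which parts of the site he wanted hidden):

```text
# Served at http://example.com/robots.txt
User-agent: *
Disallow: /
```

The file just has to sit at the root of the web server; crawlers that honour the Robots Exclusion Protocol fetch it before indexing anything.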

Another alternative is to add a robots meta tag like this to each page he doesn't want indexed:

```
<html>
<head>
<title>Faux News</title>
<meta name="robots" content="noindex, nofollow">
</head>
</html>
```

Interestingly, a number of newspapers think that the extra traffic sent their way by Google is a problem, as you can read in the blog post I already linked to above. I think they have good cause for concern, but I don't really know what they could do about it, unless they want to generate better content than other indexed websites. Once these traditional media outlets go online, it will be really difficult for them to keep their customers, and that is probably why they want a closed-content model. The catch is that, for that to work, they would have to prevent all the news from being available online; otherwise people will simply see the competitors' content instead, or someone will again generate something that people really want to read.

Google has no doubt already pointed this out to the newspapers many times, as you can see, for example, here.

## Materials In action: Plane wings

This video on YouTube shows very clearly the large deflections of the wings as the Boeing 747-400 manoeuvres during landing.

## View FCC Austenite in three dimensions

Channel 4 seems to be showing some 3D programmes on TV; as a consequence, you can get 3D glasses for free in Sainsbury's, and you can use them with the `Jmol` software to view molecular models in three dimensions.

2 FCC unit cells (click image to enlarge)

## check_html_recursive script to check website html validity

I made a dodgy bash script to check the validity of the web pages on my site http://mathewpeet.org. There is probably a better way to do this, since the validator is open source, but I couldn't easily see how it worked; any advice there is welcome.

The script makes a list of directories under the specified root directory and then checks the index.html in each directory against the w3.org validator. The status of each file is printed to the screen, and error reports are appended to a readable text file.

These are the contents of the file `check_html_recursive`:

```
#!/bin/bash
# Copyright Mathew Peet 2009, please use and modify
# but leave some credit.
# This script checks whether each page is valid HTML or not,
# and puts error reports in ~/bin/errors.txt

myfiles=$(find ~/www/mathewpeet.org/ -name 'content' -exec dirname {} \;)
#myfiles="/home/user/public_html/"
for x in $myfiles
do
    y=${x:32}   # take the string after the nth character (strip the local path prefix)
    echo "checking index in $y directory"
    w3m -dump "http://validator.w3.org/check?uri=http://mathewpeet.org/$y/" > ~/bin/temp.txt
    popo=$(grep "as XHTML 1.1!" ~/bin/temp.txt)
    echo "$popo"
    opop='Errors found while checking this document as XHTML 1.1!'
    if [ "$popo" != "$opop" ]; then
        echo "ok?!"
    else
        cat ~/bin/temp.txt >> ~/bin/errors.txt
        echo "$y/index.html does not validate as XHTML 1.1"
    fi
    echo ""
done
echo "Any reported errors written to ~/bin/errors.txt (hopefully)"
rm ~/bin/temp.txt   # clean up the temporary file
```