Extraction of structured information from invoice images (2023)

In this blog we will delve into the need for invoice scanning and how we can extract important information from invoice images.


· Introduction
· Dataset details
· Implementation
· Running the code
· Conclusion

About 18 billion invoices are issued each year in the United States and Europe alone. Form-type documents, such as invoices, purchase orders, tax forms, and insurance quotes, are common in everyday business, but current techniques to process them still require a lot of manual effort and time, or rely on OCR-based heuristics for extraction. While OCR has been quite successful in digitizing machine-printed text, the cost of these services can add up quickly.

Building a program to automate this task would greatly reduce the money and manpower needed to process these invoices by hand, making automation accessible and beneficial to the overall business. Additionally, such automated systems greatly reduce manual or human errors.

Handling invoices and streamlining checks to avoid bottlenecks caused by manual processing was one of the main reasons for developing this process.

This simple implementation can help industries that focus on invoice processing, such as automotive, product-based, and networking companies, and in general any company that needs to process invoices. Invoice processing can be useful for tracking transaction activity and can serve as a fraud detection tool, reducing wasted resources along the way.

To create such an application, our first step is to find a suitable dataset. We use the SROIE dataset, which consists of 1,000 fully scanned receipt images and their annotations, created for the Scanned Receipt OCR and Key Information Extraction (SROIE) competition.

The dataset contains the invoice images along with, for each image, annotation files describing the bounding boxes and the text present in each box, plus a JSON file with the key entities.


CSV data example:


72,25,326,25,326,64,72,64,TAN WOON YANN
50,82,440,82,440,121,50,121,BOOK TA.K(TAMAN DAYA) SDN BND
110,144,383,144,383,163,110,163,NO. 53 55, 57 & 59, JALAN SAGU 18,
162,193,334,193,334,211,162,211,81100 JOHOR BAHRU,
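Each annotation line packs eight corner coordinates followed by the text, which may itself contain commas. A quick way to parse one such line:

```python
line = "72,25,326,25,326,64,72,64,TAN WOON YANN"

parts = line.split(",")
coords = list(map(int, parts[:8]))  # four (x, y) corner points
text = ",".join(parts[8:])          # re-join in case the text contains commas

top_left = (coords[0], coords[1])
bottom_right = (coords[4], coords[5])
print(top_left, bottom_right, text)  # (72, 25) (326, 64) TAN WOON YANN
```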

JSON data example:

{
  "date": "25.12.2018",
  "address": "NO. 53 55, 57 & 59, JALAN SAGU 18,",
  "total": "9.00"
}

Import libraries

We will use basic machine learning libraries such as cv2, pandas, and numpy, along with the json library for reading and editing JSON files. SequenceMatcher from difflib is used for sequence matching, based on the longest contiguous matching subsequence (LCS). The other libraries are the usual support libraries used in Python projects.
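As a quick illustration of the score SequenceMatcher produces, two tokens that differ only by a common OCR confusion (O vs. 0) still rate highly:

```python
from difflib import SequenceMatcher

# ratio() = 2 * matches / total characters, based on longest contiguous matches
ratio = SequenceMatcher(a="JOHOR", b="J0HOR").ratio()
print(ratio)  # 0.8 -> 4 matching characters out of 5 in each string
```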

import glob
import json
import random
from pathlib import Path
from difflib import SequenceMatcher

import cv2
import pandas as pd
import numpy as np
from PIL import Image
from tqdm import tqdm
from IPython.display import display
import matplotlib
from matplotlib import pyplot, patches

Pre-process the dataset

Only a few pre-processing steps are needed before we can feed the data into our model for training:

Reading the words and bounding boxes: For each bounding-box file we iterate through each line, which has the format [box coordinates, text], split the line on ',', and extract two opposite corners of each box (since we only need two points to draw a rectangle) together with the associated text. We collect and return all the information for each image as a pandas DataFrame.

def read_bbox_and_words(path: Path):
    bbox_and_words_list = []

    with open(path, 'r', errors='ignore') as f:
        for line in f.read().splitlines():
            if len(line) == 0:
                continue

            split_lines = line.split(",")

            bbox = np.array(split_lines[0:8], dtype=np.int32)
            text = ",".join(split_lines[8:])

            bbox_and_words_list.append([path.stem, *bbox, text])

    dataframe = pd.DataFrame(bbox_and_words_list,
                             columns=['filename', 'x0', 'y0', 'x1', 'y1', 'x2', 'y2', 'x3', 'y3', 'line'])
    # Keep two opposite corners per box; drop the redundant ones
    dataframe = dataframe.drop(columns=['x1', 'y1', 'x3', 'y3'])

    return dataframe

Assign labels to words: The dataset does not classify the data type of each individual bounding box, i.e., whether the text it contains is a date, address, company name, etc. (we only receive the company name, full address, date, and total separately). So we try to map each line to one of the four entities given in the JSON file by comparing it against the original data.

def assign_line_label(line: str, entities: pd.DataFrame):
    line_set = line.replace(",", "").strip().split()
    for i, column in enumerate(entities):
        entity_values = entities.iloc[0, i].replace(",", "").strip()
        entity_set = entity_values.split()

        match_count = 0
        for l in line_set:
            if any(SequenceMatcher(a=l, b=b).ratio() > 0.8 for b in entity_set):
                match_count += 1

        # Addresses span several lines, so a partial match is enough for them
        if (column.upper() == 'ADDRESS' and (match_count / len(line_set)) >= 0.5) or \
           (column.upper() != 'ADDRESS' and (match_count == len(line_set))) or \
           match_count == len(entity_set):
            return column.upper()

    return "O"

def assign_labels(words, entities):
    max_area = {"TOTAL": (0, -1), "DATE": (0, -1)}
    already_labeled = {"TOTAL": False,
                       "DATE": False,
                       "ADDRESS": False,
                       "COMPANY": False,
                       "O": False}

    labels = []
    for i, line in enumerate(words['line']):
        label = assign_line_label(line, entities)

        already_labeled[label] = True
        # Discard unlikely labels that appear after TOTAL or DATE
        if (label == "ADDRESS" and already_labeled["TOTAL"]) or \
           (label == "COMPANY" and (already_labeled["DATE"] or already_labeled["TOTAL"])):
            label = "O"

        # Keep only the largest bounding box for TOTAL and DATE
        if label in ["TOTAL", "DATE"]:
            x0_loc = words.columns.get_loc("x0")
            bbox = words.iloc[i, x0_loc:x0_loc + 4].to_list()
            area = (bbox[2] - bbox[0]) * (bbox[3] - bbox[1])  # width * height

            if max_area[label][0] < area:
                max_area[label] = (area, i)

            label = "O"

        labels.append(label)

    labels[max_area["DATE"][1]] = "DATE"
    labels[max_area["TOTAL"][1]] = "TOTAL"

    words["label"] = labels
    return words

Create dataset module: Finally, we wrap the dataset in a module that gathers all the necessary information in a predefined, standardized form to easily train the model. We also use the tqdm library to display a progress bar, making it easier to follow the reading progress.

def dataset_creator(folder: Path):
    bbox_folder = folder / 'box'
    entities_folder = folder / 'entities'
    img_folder = folder / 'img'

    # Sort the file lists so the three folders stay aligned
    entities_files = sorted(entities_folder.glob("*.txt"))
    bbox_files = sorted(bbox_folder.glob("*.txt"))
    img_files = sorted(img_folder.glob("*.jpg"))

    data = []

    print("Reading the dataset:")
    for bbox_file, entities_file, img_file in tqdm(zip(bbox_files, entities_files, img_files), total=len(bbox_files)):
        bbox = read_bbox_and_words(bbox_file)
        entities = read_entities(entities_file)
        image = Image.open(img_file)

        bbox_labeled = assign_labels(bbox, entities)
        del bbox

        new_bbox_l = []
        for index, row in bbox_labeled.iterrows():
            new_bbox_l += split_line(row)
        new_bbox = pd.DataFrame(new_bbox_l, columns=bbox_labeled.columns)
        del bbox_labeled

        # Additional label assignment to increase label accuracy (can be omitted)
        for index, row in new_bbox.iterrows():
            label = row['label']

            if label != "O":
                entity_values = entities.iloc[0, entities.columns.get_loc(label.lower())]
                entity_set = entity_values.split()

                if any(SequenceMatcher(a=row['line'], b=b).ratio() > 0.7 for b in entity_set):
                    label = "S-" + label
                else:
                    label = "O"

            new_bbox.at[index, 'label'] = label

        width, height = image.size

        data.append([new_bbox, width, height])

    return data
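Note that dataset_creator calls two helpers, read_entities and split_line, whose definitions are not shown here. As a rough sketch, and assuming the JSON annotation layout shown earlier, read_entities could look like this (the exact field handling in the original implementation may differ):

```python
import json
from pathlib import Path

import pandas as pd

def read_entities(path: Path):
    # Load the entity annotations and wrap them in a one-row DataFrame,
    # so entities.iloc[0, i] addresses each field as assign_line_label expects
    with open(path, 'r', errors='ignore') as f:
        entities = json.load(f)
    return pd.DataFrame([entities])
```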

Import the LayoutLM model

The LayoutLM model is a simple but effective pre-training method for text and layout, aimed at document image understanding and information extraction tasks, and is therefore well suited to processing semi-structured invoice images. The model is available both in Microsoft's unilm repository and in the Hugging Face library, so we can install it from source or use it directly from Hugging Face. We will use the former method.

git clone https://github.com/microsoft/unilm.git
cd unilm/layoutlm/deprecated
pip install .

Next, we load the base LayoutLM model, since our use case is small and does not call for a larger model size.

pretrained_model_folder_input = sroie_folder_path / Path('layoutlm-base-uncased')

label_file = Path(dataset_directory, "labels.txt")
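The labels.txt file simply lists one label per line. Given the "S-" prefixing applied during pre-processing, it would look something like this (an assumed layout based on the labels assigned above, not copied from the original repository):

```
S-COMPANY
S-DATE
S-ADDRESS
S-TOTAL
O
```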

Train the model

We run the script run_seq_labeling.py to train the model, passing it all the necessary information.

# By specifying --do_train, we tell the script to train
python run_seq_labeling.py \
--data_dir /dataset \
--labels /dataset/labels.txt \
--model_name_or_path "{pretrained_model_folder_input}" \
--model_type layoutlm \
--max_seq_length 512 \
--do_lower_case \
--do_train \
--num_train_epochs 10 \
--logging_steps 10 \
--save_steps -1 \
--output_dir output \
--overwrite_output_dir \
--per_gpu_train_batch_size 8 \
--per_gpu_eval_batch_size 16

# Evaluate against the test set and make predictions:
# replacing --do_train with --do_predict instructs the model to make
# predictions on the given dataset
python run_seq_labeling.py \
--data_dir /dataset \
--labels /dataset/labels.txt \
--model_name_or_path "{pretrained_model_folder_input}" \
--model_type layoutlm \
--do_lower_case \
--max_seq_length 512 \
--do_predict \
--logging_steps 10 \
--save_steps -1 \
--output_dir output \
--per_gpu_eval_batch_size 8

Testing the model

We can now read the predicted labels and bounding boxes, draw them on the invoice image, and compare them with the original annotations (the ground-truth bounding boxes) to see how well the model was trained.
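A minimal sketch of the drawing step, using PIL's ImageDraw (the canvas, coordinates, and label here are placeholders, not actual model output):

```python
from PIL import Image, ImageDraw

# Stand-in for Image.open("invoice.jpg"): a blank white canvas
image = Image.new("RGB", (400, 200), "white")
draw = ImageDraw.Draw(image)

# One predicted box in (x0, y0, x2, y2) form, plus its predicted label
x0, y0, x2, y2, label = 50, 82, 340, 121, "COMPANY"
draw.rectangle([x0, y0, x2, y2], outline="red", width=2)
draw.text((x0, y0 - 12), label, fill="red")

image.save("invoice_with_predictions.png")
```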


From the above results, we can see that the model's predictions are very close to the ground truth. There are certain instances of discrepancy, such as the last set of images, where the model fails to identify the total cost.


With more hyperparameter optimization and fine-tuning of LayoutLM, accuracy can be improved, and the model can be trained to recognize many more classes than these four fairly quickly, even with just the base uncased model.


The link above takes you to our code implementation, which is designed to run independently.

Step 0: Register on Kaggle to be able to enable the GPU.

Step 1: After opening the link, click Edit in the top right corner.

Step 2: Click the three dots in the upper-right corner of the screen, go to Accelerator, and select any GPU.


Step 3: Press Run All.

Step 4: Once everything has run, you can scroll down to the output cells and test the different outputs.


Note that it may take some time to run, as the model needs to be loaded and trained.

Invoice processing is challenging when done manually, but with our design we are able to significantly reduce the manual checking and processing required while improving and supporting industry standards.

I want to thank my friend Abdu Rehman PS, who created this project with me for the Py.Hack hackathon hosted by Cohesive at RVCE.




