
Writing IEnumerable data to CSV/Excel as a table (HuLib 1.0.7 feature)

A common requirement for us is exporting data to Excel or CSV files. While it is not too daunting a task, its frequency prompted me to look for a more concise way of writing it.

Based on the established practices we first get all the data we need to export into some form of IEnumerable and then have a function to export it to the desired target.

My previous approach was to use one of our writer classes (i.e. BufferedExcelWriter or CSVWriter) to write each field one by one. This is quick to write and looks like this:

  1. Write each header one by one with .Write(“header name”)
  2. Loop through rows of the IEnumerable
    1. For each of the columns, we need to write the value
    2. Write a newline

So, this is not too difficult to implement, but you end up with many lines, and the code for the headers is separate from the code for the table data. This can complicate things when we need to rearrange, add, or remove columns, since it becomes quite easy to accidentally misalign them.

I thought it would be nice if, instead, a single call could define the columns and the data they should contain, rather than iterating through them directly. This may not be the best approach in all cases – you give up explicit looping in favour of a single function per value – but for our exports it usually satisfies our requirements.


Say we have the following model class:

class TestData
{
	public string Name { get; set; }
	public string Description { get; set; }
	public decimal Amount { get; set; }

	public TestData(string name, string description, decimal amount)
	{
		Name = name;
		Description = description;
		Amount = amount;
	}
}
We want to export all 3 properties per line.

The old approach would look like this (for Excel):

BufferedExcelWriter writer = new BufferedExcelWriter();

// Headers, written one by one
writer.Write("Name");
writer.Write("Description");
writer.Write("Amount");
writer.NewLine(); // move to the next row (exact method name may differ)

foreach (TestData data in testData)
{
	// One value per column, then move to the next row
	writer.Write(data.Name);
	writer.Write(data.Description);
	writer.Write(data.Amount);
	writer.NewLine();
}

writer.Save("New File.xlsx");

This works and gives us what we want but can get tedious.

The new approach makes use of some new things in HuLib, namely ExportTableBuilder:

BufferedExcelWriter writer = new BufferedExcelWriter();

writer.WriteTable(testData, new ExportTableBuilder<TestData>()
	.AddColumn("Name", d => d.Name)
	.AddColumn("Description", d => d.Description)
	.AddColumn("Amount", d => d.Amount));
writer.Save("New File.xlsx");

Now all of the content is written to the file in a single call! It is very clear which header corresponds to which value, and it is impossible to misalign the headers and the values.

Seeing that this reduces content generation to a single call in most cases, I took it one step further and added extensions for CSV and Excel to export to a file directly:

testData.ToExcel("New File.xlsx", new ExportTableBuilder<TestData>()
	.AddColumn("Name", d => d.Name)
	.AddColumn("Description", d => d.Description)
	.AddColumn("Amount", d => d.Amount));

Now you can get a list of objects into a CSV / Excel file without worrying about which class to use.

Note: there is currently no support for formatting (in Excel)

Functions of note:

// Some list of whatever model you want
List<TestData> testData = new List<TestData>()
{
	new TestData("Apple", "A fruit", 30),
	new TestData("Banana", "Also a fruit", 22),
	new TestData("Flamingo", "Not a fruit", 4),
};

// Table definition
ExportTableBuilder<TestData> builder = new ExportTableBuilder<TestData>()
	.AddColumn("Name", d => d.Name)
	.AddColumn("Description", d => d.Description)
	.AddColumn("Amount", d => d.Amount);

// Export to Excel
Excel.Export(testData, "New File.xlsx", builder);
testData.ToExcel("New File.xlsx", builder); // This and the line above are equivalent

// Export to CSV
CSV.Export(testData, "New File.csv", builder);
testData.ToCSV("New File.csv", builder); // This and the line above are equivalent

These functions are made available via an extension on the IWriter interface, which both BufferedExcelWriter and CSVWriter now implement. If you want to enable table export for another type of writer, just implement IWriter and you will have access to the WriteTable functionality.
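Since the table definition is independent of the writer, the same builder works with the CSV writer too. A sketch (assuming CSVWriter's constructor and Save mirror BufferedExcelWriter's):

```csharp
// Same table definition, different IWriter implementation
CSVWriter writer = new CSVWriter();
writer.WriteTable(testData, new ExportTableBuilder<TestData>()
	.AddColumn("Name", d => d.Name)
	.AddColumn("Description", d => d.Description)
	.AddColumn("Amount", d => d.Amount));
writer.Save("New File.csv");
```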

Extra – Naming based on property

Just added in the latest 1.0.7 preview build is the ability to infer the header name from the property or field. If no header is specified and only an expression is given, the header will be set to the property or field's display name attribute (if it exists) or, failing that, its own name.

class TestData
{
	[DisplayName("Name2")] // the display name attribute overrides the property name
	public string Name { get; set; }
	public string Description { get; set; }
	public decimal Amount { get; set; }
}

ExportTableBuilder<TestData> builder = new ExportTableBuilder<TestData>()
	.AddColumn(d => d.Name) // Header will be Name2
	.AddColumn(d => d.Description) // Header will be Description
	.AddColumn("Amount2", d => d.Amount); // Header will be Amount2

This is just a convenience addition – it does not add new functionality, but it saves a little typing in some cases.
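Combining header inference with the export extensions from earlier gives the shortest form yet. A sketch (headers follow the rules above: the display name attribute if present, otherwise the property name):

```csharp
// One call: columns defined by expression only, headers inferred
testData.ToCSV("New File.csv", new ExportTableBuilder<TestData>()
	.AddColumn(d => d.Name)
	.AddColumn(d => d.Description)
	.AddColumn(d => d.Amount));
```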


HuLib is a library we have developed for internal use in consulting applications. Its purpose is to avoid duplication across our consulting programs by abstracting common functionality into an easy-to-use library.

The majority of our applications are written in C# and deal with AccPac, file reading/writing, and SQL interaction (both through CSQRY and otherwise), so this is where the main focus of the library's features lies.

Sage (AccPac) features

A complication in writing Sage consulting applications has been AccPac's API. While very versatile, it has an awkward syntax, requires a non-standard form of error detection, and in the case of CSQRY (which is used to send SQL queries through the Sage API) requires setting unexplained properties for it to work.

To deal with this we have abstracted away the Sage connection; you can now create a new connection with:


HuAPConnection connection = new HuAPConnection(); // OR
HuAPConnection connection = new HuAPConnection("ADMIN", "ADMIN", "SAML64");

The first line above creates a new connection via the signon manager; the second lets you set the user / password / company manually.

HuAPConnection is disposable (you can put it in a using statement and it will be disposed once it goes out of scope) and is the focal point for Sage interactions from this point on.


using (HuView arcus = connection.OpenView("AR0024"))
{
    arcus["IDCUST"] = "1200";
    if (arcus.Read())
        arcus["TEXTSNAM"] = arcus["TEXTSTRE1"]; // set name to address1
}

View interactions are similar to using the AccPac view objects directly, but the view is wrapped in an HuView and retrieved from an HuAPConnection. Like connections, HuViews are disposable; however, the connection also keeps track of all opened views, so if you forget to dispose of a view, the connection will close it for you when it closes!

In addition to OpenView there is a similar function that returns a view, GetView, which lets you repeatedly reuse the same view: if a view of the requested type has been disposed, the connection clears it and returns it to you. This means you can call GetView inside a loop without worrying about the overhead of creating new views – the connection notices an available view of the requested type, clears it, and hands it back.
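That pattern looks like this in practice. A sketch (the view ID, field, and the customerNumbers collection are placeholders):

```csharp
foreach (string customerNumber in customerNumbers)
{
	// GetView hands back an already-opened AR0024 view if one is free,
	// clearing it first, so the loop does not pay for a new view each time.
	using (HuView arcus = connection.GetView("AR0024"))
	{
		arcus["IDCUST"] = customerNumber;
		arcus.Read();
	}
}
```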

While the raw Sage view is still accessible from the HuView, most interactions use the HuView directly. As seen above, fields are assigned and retrieved via an indexer rather than the traditional Sage API way: <view>.Fields.FieldByID[<id>].set_Value(<value>), which is overkill and makes code much less readable when you have many of these operations.

Additionally, when performing operations with the HuView, any exceptions are raised as an AccPacException. This way you do not need to catch COMExceptions and read the error / warning feeds directly from the Sage API, which is what we had to do previously and which adds much more boilerplate than we would want.
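In practice, error handling collapses to a single catch block. A sketch (the view ID and field are placeholders, and the exact members of AccPacException are an assumption here):

```csharp
try
{
	using (HuView arcus = connection.OpenView("AR0024"))
	{
		arcus["IDCUST"] = "DOES-NOT-EXIST";
		arcus.Read();
	}
}
catch (AccPacException ex)
{
	// Sage errors and warnings surface here instead of as raw COMExceptions
	Console.WriteLine(ex.Message);
}
```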


We now mainly use WPF for our desktop applications. The AccPac finder has presented us with many issues, so we have developed our own version.

<hulib:HuWFEC Connection="{Binding Connection}" ViewID="BK0001" Value="{Binding Bank}"/>

The above is a line taken from one of our applications using it. You can bind to the connection, value, and filter. You can also set which field IDs are displayed, and many other settings you would expect from an AccPac finder.


It looks very similar to the Sage finder (the top of the image shows the textbox and button to open the finder; the popup below is what is shown when the button is clicked).

Other Features

There are many other quality-of-life features (e.g. a function that creates a dictionary from optional field key-value pairs). The sections below also cover features that help integrate back into AccPac (e.g. Mapping).

File IO

Often our customizations revolve around exporting or importing data to/from files. To assist in our development we have abstracted some of this functionality.


One of the commonalities between the different file formats – and even SQL and Sage views – is a feature of HuLib we call simply Mapping. Mapping lets you use annotations on your code to very easily extract data from different sources. For example, see: Reading CSV files.

This way, changes in the data source can be handled in the model that stores the data, instead of having to change both the model and the importing code.

CSV Files

See: Reading CSV files

Using the CSVFile class you can load a file with just the file name and have it all split for you (including custom delimiters and escape character parsing). On top of that, there is a mappable implementation, meaning you can simply annotate a model class you want the data to go into and have it populated in just a couple of lines.

Excel Files

See: Reading Excel Files

We have a number of different Excel utilities. The latest is the BufferedExcelReader (and its writer counterpart). With it you do not have to interact with the Excel interop API manually; you can just use ReadString and ReadLine as if it were a standard file. If you want to access specific parts of the sheet, you can do so by column/row, with the column in standard Excel letter form or in numerical form. Or you can use the mapping implementation to load data from Excel straight into a collection of your model.

One of the main advantages of the BufferedExcelReader over our previous utilities is the buffering. Interop calls into Excel are expensive, so the BufferedExcelReader preloads the whole file in bulk and then lets you interact with the data however you want.


We have developed a number of tools to help generate code that works with HuLib immediately. These include a website where uploading a CSV file gives you a model class for that file, complete with the mapping annotations; a web-based view info tool for Sage information that allows searching; and a web-based application that converts generated macro code from AccPac into C# and then into HuLib code (most Sage code in the various blog posts here is the result of running macro code through this converter and pulling out the needed parts).

And More!

This is just a summary of some of the main features of HuLib. Since we use it as a general library for our common day-to-day requirements, we are adding more to it all the time; it would be difficult to enumerate every feature present in it.

Common Implementations – AP Invoice Import


public string Import(IEnumerable<InvoiceLine> lines)
{
	// TODO: To increase efficiency, comment out any unused DB links.
	using (HuAPConnection Connection = new HuAPConnection())
	using (HuView APIBC = Connection.GetView("AP0020"))
	using (HuView APIBH = Connection.GetView("AP0021"))
	using (HuView APIBD = Connection.GetView("AP0022"))
	using (HuView APIBS = Connection.GetView("AP0023"))
	using (HuView APIBHO = Connection.GetView("AP0402"))
	using (HuView APIBDO = Connection.GetView("AP0401"))
	{
		APIBC.Browse("((BTCHSTTS = 1) OR (BTCHSTTS = 7))");
		using (HuView APIVPT = Connection.GetView("AP0039"))
		{
			APIBC["PROCESSCMD"] = "1";  // Process Command Code
			APIBC["BTCHDESC"] = "Generated invoice batch";

			// Group invoices by vendor num and doc num
			foreach (IGrouping<string, InvoiceLine> invoice in lines.GroupBy(line => line.VendorNumber + " - " + line.DocumentNumber))
			{
				APIBH["IDVEND"] = invoice.First().VendorNumber;  // Vendor Number
				APIBH["PROCESSCMD"] = "7";  // Process Command Code
				APIBH["PROCESSCMD"] = "4";  // Process Command Code
				APIBH["IDINVC"] = invoice.First().DocumentNumber;  // Document Number
				APIBH["DATEINVC"] = invoice.First().DocumentDate;
				APIBH["DATEBUS"] = invoice.First().PostingDate;

				// Clear default detail lines - we only want lines based on import file
				while (APIBD.Fetch())
					APIBD.Delete();

				foreach (InvoiceLine invoiceLine in invoice)
				{
					APIBD["PROCESSCMD"] = "0";  // Process Command Code
					APIBD["IDGLACCT"] = invoiceLine.Account;  // G/L Account
					APIBD["AMTDIST"] = invoiceLine.Amount;  // Distributed Amount
					APIBD.Insert();  // Add the detail line
				}

				APIBH["AMTGROSTOT"] = -(decimal)APIBH["AMTUNDISTR"];  // Document Total Including Tax
				APIBH.Insert();  // Add the invoice header
			}
		}
		return APIBC["CNTBTCH"].ToString();
	}
}



APIBC – AP Invoice batches

APIBH – AP Invoice header

APIBD – AP Invoice detail line

Common Implementations – GL Imports

Here is the code snippet:

public void Import(IEnumerable<DataLine> lines, DateTime postingDate)
{
	// TODO: To increase efficiency, comment out any unused DB links.
	using (HuAPConnection Connection = new HuAPConnection())
	using (HuView GLBCTL = Connection.GetView("GL0008"))
	using (HuView GLJEH = Connection.GetView("GL0006"))
	using (HuView GLJED = Connection.GetView("GL0010"))
	using (HuView GLJEDO = Connection.GetView("GL0402"))
	{
		GLBCTL["PROCESSCMD"] = "1";  // Lock Batch Switch

		GLJEH["BTCHENTRY"] = "";  // Entry Number
		GLJEH["BTCHENTRY"] = "00000";  // Entry Number
		GLJEH["SRCETYPE"] = "AP";  // Source Type
		GLJEH["DATEENTRY"] = postingDate;

		foreach (DataLine line in lines)
		{
			GLJED["ACCTID"] = line.GLAccount;  // Account Number
			GLJED["PROCESSCMD"] = "0";  // Process switches
			GLJED["SCURNAMT"] = line.Amount;  // Source Currency Amount
			GLJED.Insert();  // Add the journal detail line
		}

		GLJEH.Insert();  // Add the journal entry header
	}
}



GLBCTL is journal entry batches

GLJEH is the header for the journal entry

GLJED is the line items for the journal

Dynamic Fields in Views (1.0.5 feature)

HuLib has made Sage interactions much nicer by using an indexer to get fields from views rather than awkward functions, but there are still some annoyances. It would be nice if we didn't always have to cast from object, even when assigning to a variable of the same type.

string vendor = apibh["IDVEND"].ToString();
DateTime date = (DateTime)apibh["DATEINVC"];
decimal amount = (decimal)apibh["AMTUNDISTR"];

apibh["AMTUNDISTR"] = amount;

Now HuLib has an alternative (though the above code is still perfectly valid). There is now a dynamic member called Fields which allows you to get fields out without bothering with the type casting:

string vendor = apibh.Fields.IDVEND;
DateTime date = apibh.Fields.DATEINVC;
decimal amount = apibh.Fields.AMTUNDISTR;

apibh.Fields.AMTUNDISTR = amount; // Assignment works too

The above code leverages the dynamic typing that was added to C# some years ago. There is no IntelliSense for this type, and no compiler errors or warnings for whatever you access on it:

apibh.Fields.THISDOESNTEXIST; // this does not cause a compiler error

Note that while the above code does not generate compiler errors, it can cause runtime errors. Likewise, if you try to assign a string to an int, that too will fail at runtime.

This is just a convenience extension to views for when you don't feel like using indexers and casting; the behaviour is exactly the same. (Note that you cannot use put-without-verification with Fields.)

Common Implementation – Optional Field Mapping

It is quite common for us to use optional fields in Sage to store key-value pairs. The way this works in Sage: the Value column of an optional field is treated as the key, and the Description column as the value.

Optional Fields

To make interaction with this even easier with HuLib, include HuLib.AccPac.Client; it adds an extension for the connection that pulls all this data into a ready-to-use dictionary:

Dictionary<string, string> custVendorLookup = Connection.GetOptFieldMap("CUSTTOSTORE");
string vendor = custVendorLookup["1100"]; // 0083

Feel free to use this to lookup values from Sage without creating views manually!
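Since the result is an ordinary Dictionary<string, string>, the usual dictionary patterns apply. For example, guarding against a missing key with TryGetValue (the optional field name is the one from the example above):

```csharp
Dictionary<string, string> custVendorLookup = Connection.GetOptFieldMap("CUSTTOSTORE");

// Avoids a KeyNotFoundException when a customer has no mapping defined
if (custVendorLookup.TryGetValue("1100", out string vendor))
{
	Console.WriteLine(vendor); // e.g. 0083
}
```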

Reading CSV files

With HuLib, reading CSV files is easier than ever. There is a class in HuLib called CSVFile; you instantiate it with the name of the file you want to read. From there you can call Load to populate the structure and read values out of it, OR use the mapping framework:

public IEnumerable<MightyLine> GetLines(string fileName)
{
	CSVFile file = new CSVFile(fileName);
	List<MightyLine> mightyLines = file.CreateClasses<MightyLine>();

	return mightyLines;
}

This code creates a list of MightyLine objects (which has Mapping attributes):

public class MightyLine
{
	[Mapping(0)]
	public int Cust { get; set; }

	[Mapping(1)]
	public string CustomerName { get; set; }

	[Mapping(2)]
	public string Range { get; set; }

	[Mapping(3)]
	public string PartNumber { get; set; }
}

The number passed to the Mapping attribute represents the column in the CSV file that the data will be pulled from.

To make things even easier I have created an internal tool that will figure this out for you! http://devsrv/ViewInfo2/CSV

CSV Tool

Simply enter the name of the class you want to be generated, select an example CSV file you will be loading from and hit Generate.

On the left, you have the model that will store the rows; on the right, code for a function that reads a file in to generate these objects. It even tries to figure out the type you would want for each column (though be sure to verify it chose the correct one).


Happy Coding!

Importing Receipts into Sage Using HuLib

Here is another code dump, this time for importing Receipts into AccPac.

public void ImportReceipts(IEnumerable<ReceiptLine> receipts, string bankCode, DateTime receiptDate, string batchDescription)
{
	using (HuView ARBTA = Connection.GetView("AR0041"))
	using (HuView ARTCR = Connection.GetView("AR0042"))
	using (HuView ARTCP = Connection.GetView("AR0044"))
	using (HuView ARTCU = Connection.GetView("AR0045"))
	using (HuView ARTCN = Connection.GetView("AR0043"))
	using (HuView ARPOOP = Connection.GetView("AR0061"))
	using (HuView ARTCRO = Connection.GetView("AR0406"))
	using (HuView ARTCC = Connection.GetView("AR0170"))
	using (HuView ARPYPT = Connection.GetView("AR0049"))
	{
		ARBTA["CODEPYMTYP"] = "CA";  // Batch Type
		ARTCR["CODEPYMTYP"] = "CA";  // Batch Type
		ARTCN["CODEPAYM"] = "CA";  // Batch Type
		ARTCP["CODEPAYM"] = "CA";  // Batch Type
		ARTCU["CODEPAYM"] = "CA";  // Batch Type
		ARPOOP["PAYMTYPE"] = "CA";  // Batch Type
		ARBTA["CNTBTCH"] = "0";  // Batch Number
		ARBTA["BATCHDESC"] = batchDescription;
		ARBTA["IDBANK"] = bankCode;
		ARBTA["PROCESSCMD"] = "2";  // Process Command

		foreach (IGrouping<string, ReceiptLine> receiptLine in receipts.GroupBy(line => line.FleetID))
		{
			ARTCR["IDCUST"] = receiptLine.Key;  // Customer Number
			ARTCR["PROCESSCMD"] = "0";  // Process Command Code
			ARTCR["DATERMIT"] = receiptDate;

			foreach (ReceiptLine line in receiptLine)
			{
				ARTCP["IDINVC"] = line.Invoice + "-" + line.Store.PadLeft(4, '0');  // Document Number
				ARTCP["AMTPAYM"] = line.EntityCredit * -1;
				ARTCP["AMTERNDISC"] = line.Discount + line.ProcessingFee;
				ARTCP.Insert();  // Add the receipt detail line
			}

			ARTCP["CNTLINE"] = "-1";  // Line Number
			decimal unapplied = (decimal)ARTCR["REMUNAPL"];
			ARTCR["AMTRMIT"] = unapplied * -1;  // Bank Receipt Amount
			ARTCR.Insert();  // Add the receipt
		}
	}
}



ARBTA is the receipt batch; we are creating one of these.

ARTCR is the Receipt view; we create one per customer – multiple lines go into a single receipt if they share the same customer.

ARTCP is the Receipt Detail view; there is one per line here.

After the line items are added we set AMTRMIT to the unapplied amount multiplied by negative one. This sets the receipt amount to match what we have set in the detail lines.

Importing AR Invoices into AccPac using HuLib

Dumping some sample HuLib code for creating AR Invoices in Sage. This is copied from one of our projects, so it will have to be modified for whatever project you want to use it in.

public void ImportInvoices(IEnumerable<InvoiceLine> lines)
{
	using (HuView ARIBC = Connection.GetView("AR0031"))
	using (HuView ARIBH = Connection.GetView("AR0032"))
	using (HuView ARIBD = Connection.GetView("AR0033"))
	using (HuView ARIBS = Connection.GetView("AR0034"))
	using (HuView ARIBHO = Connection.GetView("AR0402"))
	using (HuView ARIBDO = Connection.GetView("AR0401"))
	{
		ARIBC.Browse("((BTCHSTTS = 1) OR (BTCHSTTS = 7))");
		using (HuView ARIVPT = Connection.GetView("AR0048"))
		{
			ARIBC["PROCESSCMD"] = "1";  // Process Command

			foreach (IGrouping<string, InvoiceLine> invoiceLine in lines.GroupBy(line => line.Invoice))
			{
				ARIBH["IDCUST"] = invoiceLine.First().BID;  // Customer Number
				ARIBH["PROCESSCMD"] = "4";  // Process Command
				ARIBH["INVCTYPE"] = "2";  // Invoice Type
				ARIBH["IDINVC"] = invoiceLine.First().Invoice + "-" + invoiceLine.First().Store.PadLeft(4, '0');  // Document Number
				ARIBH["DATEINVC"] = invoiceLine.First().InvDate;

				foreach (InvoiceLine line in invoiceLine)
				{
					ARIBD["PROCESSCMD"] = "0";  // Process Command Code
					ARIBD["IDDIST"] = GetDistributionCode();  // Distribution Code
					ARIBD["AMTEXTN"] = line.grosssales - line.promototal;  // Extended Amount w/ TIP
					ARIBD.Insert();  // Add the invoice detail line
				}

				ARIBH.Insert();  // Add the invoice
			}
		}
	}
}


Say we are given a collection of invoice detail lines as input.

ARIBC is the batch view, we want to create a new batch to put all the invoices in.

ARIBH is the invoice view, we want to group all of the detail lines by the invoice number and create a new invoice for each.

ARIBD is the invoice detail line view, we want a new detail line for each line in the data that we are importing.

Reading Excel Files

There are multiple methods in HuLib for reading Excel files. The latest addition is the BufferedExcelReader, which is meant to replace the older methods as it is much more performant. It achieves this by grabbing all the Excel data in bulk at the beginning and then providing methods to interact with the reader directly to get data out.

Creating an instance

To create an instance, just call the constructor with the filename:
BufferedExcelReader reader = new BufferedExcelReader(fileName);
Now you are ready to read from it! Initially, the selected sheet will be the workbook's active sheet; if you want to read a different sheet, you can pick one from the reader's Sheets collection:
BufferedExcelReader.Sheet sheet = reader.Sheets.First(s => s.Name == "Sheet2");

Reading data directly

There are two main ways to do the reading.

1. Using the reader

This is convenient when column numbers can change and you want to read data row by row and not have to keep track of the position manually:
string[] readLine = reader.ReadStrings();
object[] readData = reader.ReadLine();
ReadStrings() converts everything to a string, while ReadLine() gives you the raw types. Each call advances the reader to the next row, so you just deal with the data that comes out.
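For instance, reading the header row and the first data row (the file name is a placeholder, and it assumes the reader starts at the first row of the sheet):

```csharp
BufferedExcelReader reader = new BufferedExcelReader("Data.xlsx");

string[] headers = reader.ReadStrings(); // first row, every cell converted to a string
object[] firstRow = reader.ReadLine();   // next row, raw cell types
```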

2. Use the sheet directly

If you want to use direct locations then you can use the sheet directly:
BufferedExcelReader.Sheet sheet = reader.SelectedSheet;
object read = sheet.Read("B8");
read = sheet.Read("B", 8);
read = sheet.Read(8, 2);
All three calls return the same cell; note that the third uses the more common argument order with the row first (reversed from the first two).

Mapping with BufferedExcelReader

The quickest way to read tables from Excel is to take advantage of the Mapping framework.
To use the mapping framework we need two things: a model, and an object that implements the mappable interface. In this case a Sheet is a mappable object, so we can use it to generate classes for us.

1. The Model

The model is just a class with properties; the only difference is we must put the Mapping attribute on each property to identify the column that contains the data.
public class ReceiptLine
{
	[Mapping("A")]
	public string Store { get; set; }

	[Mapping("B")]
	public string Invoice { get; set; }

	[Mapping("C")]
	public string Date { get; set; }
}
You can see that the argument for the Mapping attribute is the column that the data will be pulled from (the columns shown here are illustrative – use whichever columns your file actually has).

2. The Mappable Object

Now we want to populate a collection of ReceiptLines from an excel file, this is trivial:
BufferedExcelReader reader = new BufferedExcelReader(fileName);
BufferedExcelMap bufferedExcelMap = reader.SelectedSheet.GetMap(9);
List<ReceiptLine> receiptLines = bufferedExcelMap.CreateClasses<ReceiptLine>();
First we create the reader, then take the selected sheet and call GetMap – this returns an object used for mapping. The argument 9 is the row where the data starts (not the header row), so in this example the data begins at A9. Using this BufferedExcelMap object, we call CreateClasses() to create all the rows of data, just as we would with other mappable objects.
Another example:
public IEnumerable<ReceiptLine> GetReceiptLines(string fileName)
{
	BufferedExcelReader reader = new BufferedExcelReader(fileName);
	BufferedExcelReader.Sheet sheet = reader.Sheets.First(s => s.Name == "Invoice Summary");
	BufferedExcelMap bufferedExcelMap = sheet.GetMap(7);
	List<ReceiptLine> fleetLines = bufferedExcelMap.CreateClasses<ReceiptLine>();

	return fleetLines;
}



Comparing this to our older Excel method (which did not preload data) on a file with only about 200 rows:
Old method: 49 seconds
New method: 2 seconds – note that the time is now spent in the constructor; the mapping part itself is very quick
The difference is entirely due to reducing the number of interop calls into Excel by fetching all the data at the start.