Month: October 2017

Common Implementations – GL Imports

Here is the code snippet:

public void Import(IEnumerable<DataLine> lines, DateTime postingDate)
{
	// TODO: To increase efficiency, comment out any unused DB links.
	using (HuAPConnection Connection = new HuAPConnection())
	using (HuView GLBCTL = Connection.GetView("GL0008"))
	using (HuView GLJEH = Connection.GetView("GL0006"))
	using (HuView GLJED = Connection.GetView("GL0010"))
	using (HuView GLJEDO = Connection.GetView("GL0402"))
	{
		GLBCTL.Compose(GLJEH);
		GLJEH.Compose(GLBCTL, GLJED);
		GLJED.Compose(GLJEH, GLJEDO);
		GLJEDO.Compose(GLJED);

		GLBCTL.RecordCreate(1);
		GLBCTL.Read();
		GLBCTL["PROCESSCMD"] = "1";  // Lock Batch Switch
		GLBCTL.Process();

		GLJEH["BTCHENTRY"] = "";  // Entry Number
		GLJEH.Browse("");
		GLJEH.Fetch();
		GLJEH["BTCHENTRY"] = "00000";  // Entry Number
		GLJEH.RecordCreate(2);
		GLJEH["SRCETYPE"] = "AP";  // Source Type
		GLJEH["DATEENTRY"] = postingDate;
		GLJED.RecordClear();

		foreach (DataLine line in lines)
		{
			GLJED.RecordCreate(0);
			GLJED["ACCTID"] = line.GLAccount;  // Account Number
			GLJED["PROCESSCMD"] = "0";  // Process switches
			GLJED.Process();
			GLJED["SCURNAMT"] = line.Amount;  // Source Currency Amount
			GLJED.Insert();
		}
		GLJEH.Insert();
	}
}

 

Explanation:

GLBCTL is the journal entry batches view

GLJEH is the header for the journal entry

GLJED is the line items for the journal
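
The snippet also assumes a simple line model. A minimal, hypothetical sketch of the DataLine class it iterates over (inferred from the properties the code reads, not part of HuLib) could be:

// Hypothetical shape inferred from the import code above.
public class DataLine
{
	public string GLAccount { get; set; }  // written to ACCTID
	public decimal Amount { get; set; }    // written to SCURNAMT
}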


Dynamic Fields in Views (1.0.5 feature)

HuLib has made Sage interactions much nicer by using an indexer to get fields from views rather than awkward functions, but there are still some annoyances. It would be nice if we didn’t always have to cast from object, even when we are assigning to a variable of the same type.

string vendor = apibh["IDVEND"].ToString();
DateTime date = (DateTime)apibh["DATEINVC"];
decimal amount = (decimal)apibh["AMTUNDISTR"];

apibh["AMTUNDISTR"] = amount;

Now HuLib has an alternative (though the above code is still perfectly valid). There is now a dynamic object called Fields which allows you to get fields out without bothering with the type casting:

string vendor = apibh.Fields.IDVEND;
DateTime date = apibh.Fields.DATEINVC;
decimal amount = apibh.Fields.AMTUNDISTR;

apibh.Fields.AMTUNDISTR = amount; // Assignment works too

The above code leverages the dynamic type functionality that was added to C# some years ago. There is no IntelliSense for this type, and the compiler gives no errors or warnings for whatever member you access on it:

apibh.Fields.THISDOESNTEXIST; // this does not cause a compiler error

This means the code above does not generate compiler errors, but it can cause runtime errors: a misspelled field name, or assigning a string to an int field, will only fail at runtime.

This is just a convenience extension to views for when you don’t feel like using indexers and casting; the behaviour is exactly the same. (Note that you cannot use put without verification with Fields.)
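
For reference, this kind of dynamic member access can be built on top of DynamicObject. The following is only a rough sketch of the idea, not HuLib's actual implementation, and assumes the HuView indexer shown above:

using System.Dynamic;

// Hypothetical sketch only -- not HuLib's real Fields implementation.
public class DynamicFields : DynamicObject
{
	private readonly HuView _view;

	public DynamicFields(HuView view) { _view = view; }

	// apibh.Fields.IDVEND resolves here as _view["IDVEND"]
	public override bool TryGetMember(GetMemberBinder binder, out object result)
	{
		result = _view[binder.Name];
		return true;
	}

	// apibh.Fields.AMTUNDISTR = amount resolves here as _view["AMTUNDISTR"] = amount
	public override bool TrySetMember(SetMemberBinder binder, object value)
	{
		_view[binder.Name] = value;
		return true;
	}
}

A Fields property typed as dynamic and returning an instance of a class like this would give the member syntax and runtime conversions shown above.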

Common Implementation – Optional Field Mapping

It is quite common to use optional fields in Sage to store key-value pairs. The way this works is that the Value column of an optional field is treated as the key and the Description column is treated as the value.

Optional Fields

To make this even easier with HuLib, include HuLib.AccPac.Client; it adds an overload on Connection that pulls all this data into a ready-to-use dictionary:

Dictionary<string, string> custVendorLookup = Connection.GetOptFieldMap("CUSTTOSTORE");
string vendor = custVendorLookup["1100"]; // 0083

Feel free to use this to look up values from Sage without creating views manually!
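
Since indexing a dictionary with a missing key throws, a defensive lookup may be worth it when the data might not contain every key. A small usage sketch (the key "1100" is just the value from the example above):

Dictionary<string, string> custVendorLookup = Connection.GetOptFieldMap("CUSTTOSTORE");

if (custVendorLookup.TryGetValue("1100", out string vendor))
{
	// vendor == "0083" in the example above
}
else
{
	// no CUSTTOSTORE optional field value exists for this key
}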

Reading CSV files

With HuLib, reading CSV files is even easier than it was before. There is a class in HuLib called CSVFile. You instantiate it with the name of the file you want to read. From there you can call Load to populate the structure and read values out of it, OR use the mapping framework:

public IEnumerable<MightyLine> GetLines(string fileName)
{
	CSVFile file = new CSVFile(fileName);
	file.Load();
	List<MightyLine> mightyLines = file.CreateClasses<MightyLine>();

	return mightyLines;
}

This code creates a list of MightyLine objects (a class whose properties carry Mapping attributes):

public class MightyLine
{
	[Mapping(0)]
	public int Cust { get; set; }

	[Mapping(1)]
	public string CustomerName { get; set; }

	[Mapping(2)]
	public string Range { get; set; }

	[Mapping(3)]
	public string PartNumber { get; set; }
}

The number, in this case, represents the column in the CSV file; that is how the data will be pulled.
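
For example, a hypothetical CSV row like the one below would populate Cust = 1001, CustomerName = Acme Hardware, Range = West, and PartNumber = PN-00042:

1001,Acme Hardware,West,PN-00042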

To make things even easier I have created an internal tool that will figure this out for you! http://devsrv/ViewInfo2/CSV

CSV Tool

Simply enter the name of the class you want to be generated, select an example CSV file you will be loading from and hit Generate.

On the left, you have the model that will store the rows. On the right, you have code for a function that reads in a file to generate these objects. It even tries to figure out the type that you would want for the data (though be sure to verify it chose the correct one).

 

Happy Coding!

Importing Receipts into Sage Using HuLib

Here is another code dump, this time for importing Receipts into AccPac.

public void ImportReceipts(IEnumerable<ReceiptLine> receipts, string bankCode, DateTime receiptDate, string batchDescription)
{
	using (HuView ARBTA = Connection.GetView("AR0041"))
	using (HuView ARTCR = Connection.GetView("AR0042"))
	using (HuView ARTCP = Connection.GetView("AR0044"))
	using (HuView ARTCU = Connection.GetView("AR0045"))
	using (HuView ARTCN = Connection.GetView("AR0043"))
	using (HuView ARPOOP = Connection.GetView("AR0061"))
	using (HuView ARTCRO = Connection.GetView("AR0406"))
	using (HuView ARTCC = Connection.GetView("AR0170"))
	{
		ARBTA.Compose(ARTCR);
		ARTCR.Compose(ARBTA, ARTCN, ARTCP, ARTCRO, ARTCC);
		ARTCP.Compose(ARTCR, ARTCU, ARPOOP);
		ARTCU.Compose(ARTCP);
		ARTCN.Compose(ARTCR);
		ARPOOP.Compose(ARBTA, ARTCR, ARTCN, ARTCP, ARTCU);
		ARTCRO.Compose(ARTCR);
		ARTCC.Compose(ARTCR);
		using (HuView ARPYPT = Connection.GetView("AR0049"))
		{
			ARBTA.RecordClear();
			ARBTA["CODEPYMTYP"] = "CA";  // Batch Type
			ARTCR["CODEPYMTYP"] = "CA";  // Batch Type
			ARTCN["CODEPAYM"] = "CA";  // Batch Type
			ARTCP["CODEPAYM"] = "CA";  // Batch Type
			ARTCU["CODEPAYM"] = "CA";  // Batch Type
			ARPOOP["PAYMTYPE"] = "CA";  // Batch Type
			ARPOOP.Cancel();
			ARBTA["CODEPYMTYP"] = "CA";  // Batch Type
			ARBTA["CNTBTCH"] = "0";  // Batch Number
			
			
			ARBTA.RecordCreate(1);
			ARBTA["BATCHDESC"] = batchDescription;
			ARBTA["IDBANK"] = bankCode;
			ARBTA["PROCESSCMD"] = "2";  // Process Command
			ARBTA.Process();
			ARBTA.Update();

			foreach (IGrouping<string, ReceiptLine> receiptLine in receipts.GroupBy(line => line.FleetID))
			{
				ARTCR.RecordCreate(2);
				ARTCR["IDCUST"] = receiptLine.Key;  // Customer Number
				ARTCP.Cancel();
				ARTCR["PROCESSCMD"] = "0";  // Process Command Code
				ARTCR.Process();

				ARTCR["DATERMIT"] = receiptDate;

				ARTCP.RecordClear();

				foreach (ReceiptLine line in receiptLine)
				{
					ARTCP.RecordCreate(0);
					ARTCP["IDINVC"] = line.Invoice + "-" + line.Store.PadLeft(4, '0'); ;  // Document Number
					ARTCP["AMTPAYM"] = line.EntityCredit * -1;
					ARTCP["AMTERNDISC"] = line.Discount + line.ProcessingFee;
					ARTCP.Insert();
				}

				ARTCP["CNTLINE"] = "-1";  // Line Number
				ARTCP.Read();
				decimal unapplied = (decimal)ARTCR["REMUNAPL"];
				ARTCR["AMTRMIT"] = unapplied* -1;  // Bank Receipt Amount
				ARTCP["CNTLINE"] = "-1";  // Line Number
				ARTCP["CNTLINE"] = "-1";  // Line Number
				ARTCP.Read();
				ARTCR.Insert();
			}
		}
	}
}

 

Explanation

ARBTA is the receipt batch; we are creating one of these.

ARTCR is the Receipt view; we create one per customer, and multiple lines end up in a single receipt if they share the same customer.

ARTCP is the Receipt Detail view; there is one per line here.

After the line items are added we set AMTRMIT to the unapplied amount multiplied by negative one. This sets the receipt amount to match what we have entered in the detail lines.
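
For reference, the ReceiptLine objects consumed here are assumed to expose at least the properties the snippet reads. A minimal, hypothetical sketch:

// Hypothetical shape inferred from the import code above.
public class ReceiptLine
{
	public string FleetID { get; set; }        // used to group lines into receipts (IDCUST)
	public string Invoice { get; set; }        // combined with Store to form IDINVC
	public string Store { get; set; }
	public decimal EntityCredit { get; set; }  // negated into AMTPAYM
	public decimal Discount { get; set; }      // summed with ProcessingFee into AMTERNDISC
	public decimal ProcessingFee { get; set; }
}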

Importing AR Invoices into AccPac using HuLib

Here is some sample HuLib code for creating AR invoices in Sage. It is copied from one of our projects, so it will need to be modified for whatever project you want to use it in.

public void ImportInvoices(IEnumerable<InvoiceLine> lines)
{
	using (HuView ARIBC = Connection.GetView("AR0031"))
	using (HuView ARIBH = Connection.GetView("AR0032"))
	using (HuView ARIBD = Connection.GetView("AR0033"))
	using (HuView ARIBS = Connection.GetView("AR0034"))
	using (HuView ARIBHO = Connection.GetView("AR0402"))
	using (HuView ARIBDO = Connection.GetView("AR0401"))
	{
		ARIBC.Compose(ARIBH);
		ARIBH.Compose(ARIBC, ARIBD, ARIBS, ARIBHO, null);
		ARIBD.Compose(ARIBH, ARIBC, ARIBDO);
		ARIBS.Compose(ARIBH);
		ARIBHO.Compose(ARIBH);
		ARIBDO.Compose(ARIBD);
		ARIBC.Browse("((BTCHSTTS = 1) OR (BTCHSTTS = 7))");
		using (HuView ARIVPT = Connection.GetView("AR0048"))
		{
			ARIBC.RecordCreate(1);
			ARIBC["PROCESSCMD"] = "1";  // Process Command
			ARIBC.Process();
			ARIBC.Read();

			foreach (IGrouping<string, InvoiceLine> invoiceLine in lines.GroupBy(line => line.Invoice))
			{
				ARIBH.RecordCreate(2);
				ARIBD.Cancel();
				ARIBH["IDCUST"] = invoiceLine.First().BID;  // Customer Number
				ARIBH["PROCESSCMD"] = "4";  // Process Command
				ARIBH["INVCTYPE"] = "2";  // Invoice Type
				ARIBH.Process();
				ARIBH["IDINVC"] = invoiceLine.First().Invoice + "-" + invoiceLine.First().Store.PadLeft(4, '0');  // Document Number
				ARIBH["DATEINVC"] = invoiceLine.First().InvDate;
				ARIBD.RecordClear();

				ARIBD.RecordCreate(0);

				ARIBD["PROCESSCMD"] = "0";  // Process Command Code
				ARIBD.Process();

				foreach (InvoiceLine line in invoiceLine)
				{
					ARIBD.RecordClear();
					ARIBD.RecordCreate(0);
					ARIBD["PROCESSCMD"] = "0";  // Process Command Code
					ARIBD.Process();
					ARIBD["IDDIST"] = GetDistributionCode();  // Distribution Code
					ARIBD["AMTEXTN"] = line.grosssales - line.promototal;  // Extended Amount w/ TIP
					ARIBD.Insert();
				}
				
				ARIBH.Insert();
			}
		}
	}
}

 

Explanation

Say we are given a collection of invoice detail lines as input.

ARIBC is the batch view; we create a new batch to put all the invoices in.

ARIBH is the invoice view; we group the detail lines by invoice number and create a new invoice for each group.

ARIBD is the invoice detail line view; we create a new detail line for each line in the data we are importing.
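
For reference, a minimal, hypothetical shape for the InvoiceLine objects the snippet reads (property names follow the code above, including its casing):

// Hypothetical shape inferred from the import code above.
public class InvoiceLine
{
	public string Invoice { get; set; }      // grouping key; combined with Store to form IDINVC
	public string Store { get; set; }
	public string BID { get; set; }          // written to IDCUST
	public DateTime InvDate { get; set; }    // written to DATEINVC
	public decimal grosssales { get; set; }  // casing follows the snippet
	public decimal promototal { get; set; }
}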

Reading Excel Files

There are multiple methods in HuLib for reading Excel files. The latest addition is BufferedExcelReader, which is meant to replace the older methods because it is much more performant. It achieves this by grabbing all of the Excel data in bulk at the beginning and then providing methods on the reader to get the data back out.

Creating an instance

To create an instance, just call the constructor with the filename:
BufferedExcelReader reader = new BufferedExcelReader(fileName);
Now you are ready to read from it! If you want to switch the sheet you are reading you can use:
reader.GoToSheet("MySheetName");
Note that, initially, the selected sheet is the workbook's active sheet.

Reading data directly

There are two main ways to do the reading.

1. Using the reader

This is convenient when column numbers can change and you want to read data row by row and not have to keep track of the position manually:
string[] readLine = reader.ReadStrings();
object[] readData = reader.ReadLine();
ReadStrings() will convert everything to a string, while ReadLine() gives you the raw types. Each call advances the reader to the next row, so you just deal with the data as it comes out.
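For example, a short (hypothetical) loop that reads the first few rows as strings, one row per call:

// The 10-row limit is arbitrary; each ReadStrings() call returns one row and advances
for (int i = 0; i < 10; i++)
{
	string[] row = reader.ReadStrings();
	Console.WriteLine(string.Join(", ", row));
}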

2. Use the sheet directly

If you want to use direct locations then you can use the sheet directly:
BufferedExcelReader.Sheet sheet = reader.SelectedSheet;
object read = sheet.Read("B8");
read = sheet.Read("B", 8);
read = sheet.Read(8, 2);
These three calls return the same value. Note that the third one uses the more common argument order, where the row comes first (it looks reversed from the first two).

Mapping with BufferedExcelReader

The quickest way to read tables off of Excel is to take advantage of the Mapping framework.
To use the mapping framework we need two things: a model and an object that implements the mappable interface. In this case a Sheet is a mappable object, so we can use it to generate classes for us.

1. The Model

The model is just a class with properties; the only difference is that we must put the Mapping attribute on each property to identify the column that contains the data.
public class ReceiptLine
{
	[Mapping("A")]
	public string Store { get; set; }

	[Mapping("B")]
	public string Invoice { get; set; }

	[Mapping("C")]
	public string Date { get; set; }
}
You can see that the argument for the Mapping attribute is the column that the data will be pulled from.

2. The Mappable Object

Now we want to populate a collection of ReceiptLines from an Excel file; this is trivial:
BufferedExcelReader reader = new BufferedExcelReader(fileName);
BufferedExcelMap bufferedExcelMap = reader.SelectedSheet.GetMap(9);
List<ReceiptLine> receiptLines = bufferedExcelMap.CreateClasses<ReceiptLine>();
First we create the reader, then we take the selected sheet and call GetMap. This returns an object used for mapping; the argument 9 is the row where the data starts (not the header row), so in this example A9 is the first cell of data. Using this BufferedExcelMap object we call CreateClasses to create all the rows of data, just as we would with other mappable objects.
Another example:
public IEnumerable<ReceiptLine> GetReceiptLines(string fileName)
{
	BufferedExcelReader reader = new BufferedExcelReader(fileName);
	BufferedExcelReader.Sheet sheet = reader.Sheets.First(s => s.Name == "Invoice Summary");
	BufferedExcelMap bufferedExcelMap = sheet.GetMap(7);
	List<ReceiptLine> fleetLines = bufferedExcelMap.CreateClasses<ReceiptLine>();

	return fleetLines;
}

 

Performance

Comparing this to the older Excel method, which did not preload data, on a file with only about 200 rows:
Old method: 49 seconds
New method: 2 seconds – note that the time is now spent in the constructor; the mapping part itself is very quick
The difference is entirely due to reducing the number of interop calls into Excel by fetching all the data at the start.
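
The same idea can be seen with the standard Excel interop API. This is only an illustrative sketch of the bulk-fetch approach, not HuLib's internals: one call pulls the whole used range into memory, and every subsequent read is an in-memory array lookup instead of a cross-process call.

using Excel = Microsoft.Office.Interop.Excel;

string fileName = @"C:\Temp\Example.xlsx";  // hypothetical path

Excel.Application app = new Excel.Application();
Excel.Workbook workbook = app.Workbooks.Open(fileName);
Excel.Worksheet sheet = (Excel.Worksheet)workbook.ActiveSheet;

// One interop call for the whole sheet instead of one per cell
object[,] data = (object[,])sheet.UsedRange.Value2;

// From here on, reads are in-memory: data[row, column] with 1-based indices
object b8 = data[8, 2];

workbook.Close(false);
app.Quit();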