Latest Publications

Solved! Moving the ECB to the Title column in a Document Library without breaking it

I’ve been searching around, and apparently so have a few other people, for a way to move the link/menu (the Edit Control Block, or ECB) from the File Name column in a Document Library to another column while preserving all of the integration functionality.

There were SharePoint Designer-based hacks that SORT of did this by setting the ListItemMenu property of a particular column… but that broke two things:
1) Clicking on the document took you to the properties page instead of the document.
2) The Open with “Microsoft Word | Excel | etc.” option didn’t work.

No solution had been forthcoming, so I buckled down and wrote one. Instead of posting it here, I put it on CodeProject.com. Check it out.

Not only does the Title column now carry the ECB, it falls back to the file name if there is no title.
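
The fallback itself is just the obvious check. Something like this sketch, given the document’s SPListItem (illustrative; not the actual rendering code from the article):

C#

    // Show the Title if one exists; otherwise fall back to the file name.
    // 'item' is the document's SPListItem.
    string title = item["Title"] as string;
    string display = string.IsNullOrEmpty(title) ? item.File.Name : title;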

Hopefully someone finds it useful. In my case it was useful because I was dealing with a couple of thousand imported documents whose file names had no bearing on their human-readable Titles.

SPUtility.TransferToSuccessPage: Here is the Missing Documentation

Ok, so the SPUtility class is chock full of useful little doodads and widgets that try to make life easier for SharePoint developers. Actually, scratch that: it’s there to make life easier for the Microsoft team who develops SharePoint itself, but they’ve kindly let us have a bunch of public methods. One of the nifty features is the TransferToSuccessPage method, and as you can see here, it’s very poorly documented. On top of that, it has some less-than-obvious behavior, so that lack of documentation is sometimes painful. There are millions of blog posts out there that touch on it, mention it, and complain about its lack of official documentation, so I won’t belabor that point. Instead, here are the things you have to know when using it (specifically, the four-string overload):


SPUtility.TransferToSuccessPage(message, nextUrl, linkText, linkUrl);


I typically use it like this:

      
      SPUtility.TransferToSuccessPage(
          "Successfully fooed your bar." +
          " Click {0} to go back to the web root." +
          " Click OK to proceed back to the list.",
          someSPList.DefaultViewUrl
              .Replace(Web.ServerRelativeUrl, string.Empty),
          "Here",
          Web.Url);

      // The + concatenations in the strings are just for
      // your viewing pleasure; I don't usually do that.
    
  • The ‘nextUrl’, which is where you go when you click ‘OK’, needs to be relative to the site (SPWeb) root; NOT the site collection root, NOT the server root, and NOT a full URL. You can see above that I’ve taken a server-relative URL and stripped out the server-relative URL of the site. This is because the ‘next url’ is built like this (with the + operator) in the code:


    currentSPContext.Web.Url + nextUrl;


  • The {0} in the ‘message’ parameter is replaced by the generated link. Specifically, it creates an a-tag like this:

    <a href="{linkUrl}">{linkText}</a>

    You can specify a full URL, but if you specify a relative URL it will be relative to ~/_layouts/success.aspx. In case you didn’t notice, ~ is the WEB APPLICATION root, not the site collection root. (They may be the same on your dev box, throwing some confusion your way when you go to production.) This is because SPUtility just straight up calls Server.Transfer("~/_layouts/success.aspx"), which has no knowledge of SharePointyness like site collections and such.

  • If you do not include a {0} in your message string, the last two parameters, linkText and linkUrl, are effectively ignored. (Success.aspx still fiddles with them, but it doesn’t affect your output.)

  • This will NOT work in a workflow or a SharePoint event handler because it requires an HttpContext, and those things don’t usually have one.

  • Because it calls Server.Transfer, it WILL throw a first-chance ThreadAbortException, so don’t call it from within a “catch-all” try block unless you plan to let the ThreadAbortException through (see the sketch below). See the MSDN docs on Server.Transfer for details.
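
For instance, if your code already lives inside a catch-all, something like this sketch keeps the expected ThreadAbortException from being swallowed (the message, list URL, and link are invented for illustration):


      // using System.Threading;
      // using Microsoft.SharePoint;
      // using Microsoft.SharePoint.Utilities;

      try {
          SPUtility.TransferToSuccessPage(
              "Successfully fooed your bar. Click {0} to go back to the web root.",
              "Lists/Foo/AllItems.aspx",   // relative to the SPWeb root, per above
              "Here",
              SPContext.Current.Web.Url);
      }
      catch (ThreadAbortException) {
          // Expected: Server.Transfer ends the request by aborting the thread.
          // Let it propagate so ASP.NET can finish the transfer.
          throw;
      }
      catch (Exception ex) {
          // Only genuine failures land here.
          System.Diagnostics.Trace.WriteLine(ex.ToString());
      }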

There you have it.

SPFile.CheckOut() and modifying metadata properties

It seems like it should be pretty straightforward. (But then again, nothing ever is when it comes to SharePoint development.) You check a file out, modify its metadata properties, and check it back in. I spent something on the order of three hours trying to figure out why that kept blowing up on me before I found a subtle, stupid thing that, of course, isn’t explicitly documented.

I thought this code should have worked…

C#


    var list = Web.Lists[listName];
    var item = list.GetItemByUniqueId(new Guid(uniqueId));

    Web.AllowUnsafeUpdates = true;
    var folder = item.File.ParentFolder;

    var checkoutRequired = list.ForceCheckout;

    if (checkoutRequired) {
        item.File.CheckOut();
    }

    var file = folder.Files.Add(item.File.Name, 
        item.File.OpenBinaryStream(), true,
        "Published by the system after approvals, Pending check-in.", true);

    if (checkoutRequired) {
         file.CheckIn("Automated Go Live Check-in by the Workflow Process");
         file.CheckOut();
    }

   // todo: set all the metadata here
   var linkUrlValue = new SPFieldUrlValue();
   linkUrlValue.Description = "Log Link";
   linkUrlValue.Url = WorkflowInformation.EventLogLink;
   item["LogLink"] = linkUrlValue;

   // explode! "document modified by 'domain\username' at {Now}"
   item.UpdateOverwriteVersion(); 
                    
   if(checkoutRequired)
       file.CheckIn("Automated Go Live Check-in by the Workflow Process", 
            SPCheckinType.OverwriteCheckIn);
                    
   Web.AllowUnsafeUpdates = false;
                    

…But it didn’t. It kept blowing up. I tried various different things, and I couldn’t figure out what was going on. Then I noticed something subtle that all of the examples out there did: They used the file.Item property of the checked out document instead of the original item. I still don’t know why this makes a difference, but if you use it, it works. If you use the ‘original’ item reference instead of the file.Item, it yaks. Below is the only slightly different, yet infinitely more ‘working’ code.

C#


    var list = Web.Lists[listName];
    var item = list.GetItemByUniqueId(new Guid(uniqueId));

    Web.AllowUnsafeUpdates = true;
    var folder = item.File.ParentFolder;

    var checkoutRequired = list.ForceCheckout;

    if (checkoutRequired) {
        item.File.CheckOut();
    }

    var file = folder.Files.Add(item.File.Name, 
        item.File.OpenBinaryStream(), true,
        "Published by the system after approvals, Pending check-in.", true);

    if (checkoutRequired) {
         file.CheckIn("Automated Go Live Check-in by the Workflow Process");
         file.CheckOut();
    }

   // The CRAZY thing here is that you can't use another 
   // reference to the item if the library requires a checkout, even if 
   // it's the one from which you originally got the file reference 
   // in the first place.
   // You MUST use the file.Item reference to the item, or it breaks when 
   // you set the metadata.

   // Your humble author and narrator spent 3 hours trying to figure this out.
                    

   // todo: set all the metadata here
   var linkUrlValue = new SPFieldUrlValue();
   linkUrlValue.Description = "Log Link";
   linkUrlValue.Url = WorkflowInformation.EventLogLink;
   file.Item["LogLink"] = linkUrlValue;
   file.Item.UpdateOverwriteVersion();
                    
   if(checkoutRequired)
       file.CheckIn("Automated Go Live Check-in by the Workflow Process",   
            SPCheckinType.OverwriteCheckIn);
                    
   Web.AllowUnsafeUpdates = false;
                    

The end of the world?

Back when I was a kid, I remember having something called, (astonishingly politically incorrectly,) the “Indian Weather Rock.” It was a knick-knack that sat on a shelf somewhere, and the trick was that it would tell you about the weather. You put it outside when you wanted a reading: if the rock was wet, it was raining; if it was white, it was snowing… etc. It was an amazing piece of human engineering.

I thought, with all of the tumultuous events of today, be they financial systems collapsing, Mayan calendars running out of squares into which we can write the year, famous people changing careers, or disastrous political machinations taking shape left and right, that what the world needs is a digital way to understand exactly what kind of mess we’re in. So, I’ve put together what may be the most comprehensive computing device I’ve ever made, and let me tell you, it’s almost NEVER wrong. Go here (click) and check out some of its unequivocal magic.

B+ Trees for the masses

Roger has done, in the generic, what nobody else I’ve found to date has: he’s implemented an open-source B+ tree library for .NET that scales well and is completely thread safe. I’ve tried making one myself, and while I did manage to implement a B+ tree, it wasn’t thread safe and didn’t scale linearly into the hundreds of millions of items… this one does.

I’ve also chatted with Mr. Mehdi Gholam, who has his own working but not-so-generic version of a B+ tree in RaptorDb, and I’ve seen countless people asking for one all over message boards and forums. Well, there it is. I know it’s not much hearing me say it, but free software isn’t free for the guy who writes it, so… thanks, Roger. Great work!

Bring on the rocks.

If you’ve worked hard to acquire a talent, or you’ve busted your rear end in the gym, if you’ve spent hours, days, or weeks creating some kind of art or a kickass computer game, I have no problem with you telling me about it. There’s nothing wrong with being labelled ego-centric or being called a ‘show off’ by the people who haven’t done these things themselves. It should be inspiring to other people, not cause them to throw rocks. Jealousy is a lot uglier than pride. Bring on the proud people, and the rocks.

Runaround: An old school puzzle game (My first XNA game.)

This is actually a rewrite of a game I made when I was 13 on my good old Mac Classic (it was black and white.) It plays the same in this version, except back then my brother was designing the levels so they were a lot harder and cooler.

Click here to try it out

Screenshot

It automatically checks for updates, so when I get around to writing new ones, they’ll come down automatically.

I made it easy to modify the levels, and if you’re enterprising enough to create an 18 line text file, you can make your own:

Object

Get all of the jewels (gold for now, the tiles suck) and get to the exit (which will open once you have all of them). Push blocks into holes to fill them in so you can cross. Blocks can also go through one-way doors if there is something acceptable on the other side.

Controls

Arrows: Move

Space: Kill yourself

Adding new maps:

Map files are simple text files of 18 lines with 18 characters per line. Each character corresponds to a tile.
The tile to character map is listed below.

Copy the template below into a new text file and name it LevelX.map, where X is the level number (like Level12.map for level 12). Levels have to be numbered in sequence. Put the LevelX.map file in the Content folder. It’s a ClickOnce app, so you’ll have to figure out where the Content folder ended up; an easy way to find it is to search for Level1.map from the Start menu. Yeah kids, it’s XNA, so Windows only.

Edit each of the characters in the map file as shown in the chart below. Just change the ones in the template to whatever you want them to be. Note: it IS possible to make a map that you can’t beat, so that part is up to you.

chr  tile
---------------
(.)  Empty
(N)  PlayerUp
(S)  PlayerDown
(W)  PlayerLeft
(E)  PlayerRight
(T)  Tree
($)  Jewel
(@)  Rock
(-)  Hole
(+)  MovableBlock
(L)  OneWayLeft
(R)  OneWayRight
(U)  OneWayUp
(D)  OneWayDown
(O)  ClosedExit
(X)  OpenExit

Map File Template (MUST be 18×18; the below is an example)

@@@@@@@@@@@@@@@@@@
@................@
@................@
@................@
@................@
@................@
@................@
@................@
@................@
@................@
@................@
@................@
@................@
@................@
@................@
@................@
@................@
@@@@@@@@@@@@@@@@@@
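
If you’re curious how the game might read one of these files in, here’s a minimal sketch of a loader for the 18×18 format described above (illustrative only; the actual loader may differ):

C#

    using System.IO;

    static class MapLoader {
        public const int Size = 18;

        // Reads a LevelX.map file into an 18x18 grid of tile characters.
        public static char[,] Load(string path) {
            string[] lines = File.ReadAllLines(path);
            if (lines.Length != Size)
                throw new InvalidDataException("Map must be exactly 18 lines.");

            var tiles = new char[Size, Size];
            for (int row = 0; row < Size; row++) {
                if (lines[row].Length != Size)
                    throw new InvalidDataException("Each line must be exactly 18 characters.");
                for (int col = 0; col < Size; col++)
                    tiles[row, col] = lines[row][col]; // e.g. '@' = Rock, '$' = Jewel
            }
            return tiles;
        }
    }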

Email me your cool map files to dave dotdolan at gmail, or post them in the comments.

Recent Changes:

Made map files 18×18 (previously 16×16)
Fixed a bug in pushing bricks through one-way doors
Refactored the code so that it’s closer to an MVC-ish pattern -> still not quite there
New rock tile
Added sounds other than system beeps

Todo:
Status screen -> number of moves, crystals left, current level, (death count?)
Opening splash screen (pick your level -> level preview/selection screen)
Transition effects (pushing the block, dying, getting some gold)
Animated tiles

Future:
Level editor that looks nicer than text files
Scrollable boards that exceed 18×18

I’m going to eventually post the source somewhere but it’s nothing really interesting. Email me if you want it.

The Whole Shebang: Implementing a general purpose language interpreter using GOLD Parser Builder and the new BSN Engine

Update: The source code for this project has been added to a Google Code repository and will be updated there.

Second update: The article appears, further updated, on CodeProject. Go check it out there! (I’ve removed it from this site.)

A Misadventure in (someone else’s) GoF Land

So, here I go again. I don’t want my sparsely populated blog to turn into a collection of articles from the Daily WTF, but I do want to use a real world example, with the names changed to protect the innocent and guilty, to illustrate a point.

Design patterns, object oriented design, and the employment thereof can be just as much a cure in search of a disease as they are a cure for a disease. I’ve recently run into this lock, stock, and barrel in a project with my unnamed employer.

There is this project, let’s call it FLEA, which is a relatively straightforward web forms application. Written as a front end to handle some day-to-day process for a particular customer, it was made out to be some kind of shimmering example of, well, what the architect might have called “stuff I read on the back cover of a book in the lavatory.”

It had “everything”: service locators, coding against interfaces, and, perhaps most glaringly, a monstrous construct that was referred to as “The Repository Pattern on steroids.”

It’s one thing to permit reuse of code by decoupling interface from implementation, but it’s quite another to hobble the developer by only making available interfaces that operate on fully populated domain objects.

I saw places where they would set a flag in a table record by loading an object, all of its sub-objects, and their sub-objects, with carefully constructed IF-THEN logic to subvert graph loops, and then call ‘myPerson.Disabled = true; PersonRepository.Save(myPerson);’. Now, it’s fine if you present that interface, then detect that only the Disabled flag changed and update just that, but what I found was quite another beast: it proceeded to delete half of the sub-objects, re-add them, ignore the rest of them, and do a multi-table update of the full object (without using a join anywhere in the logic), creating a new database connection for every other call. Needless to say, this worked brilliantly when the thing only loaded 2 or 3 master objects, but when it was scaled to ten thousand users, disabling a user, and I use this phrase QUITE LITERALLY, sent the server out to lunch. You could go to the corner store and back before it finished running a top-down depth loop of connection-grabbing calls to update stuff.
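
For contrast, here’s roughly what flipping one flag should cost: one statement against one row. (A hedged sketch; PersonDataContext, the Person table, and personId are invented for illustration, in the LINQ to SQL we eventually retooled with.)

C#

    // One round trip, one row, no object-graph loading.
    int personId = 42;                           // the user being disabled (illustrative)
    using (var db = new PersonDataContext()) {   // hypothetical LINQ to SQL context
        db.ExecuteCommand(
            "UPDATE Person SET Disabled = 1 WHERE PersonId = {0}", personId);
    }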

It’s still probably not quite clear what degree of repository extremism I’m talking about here:

We had:
PersonRepository (with methods to GetAll, GetByLocation, GetById, Update, and Delete)
PersonLocationRepository (which was called repeatedly for every person loaded, even when displaying them in a dialog box, to avoid having to learn what a join is.)
PersonLocationActivityRepository
ActivityRepository
PlacesRepository
PersonEthnicityRepository
PersonJobCodeRepository
PersonHRRecordRepository
PersonMotherInLawRepository (Ok, I’m making that up.)
plus about 20 more (not exaggerating, I swear.)

That variety alone is not a problem. But calling LoadPerson (or whatever the actual routine was called) invoked a service locator, which loaded a bunch of other things, which called a service layer, which called a business logic layer, which then loaded a repository, which loaded a DAL object, for each one of these things, DEEP. So loading a person ended up instantiating a generic service loader factory, which created a service layer factory, which called a business logic layer factory, which created a DAL factory, and eventually instantiated a DAL object, just to read the ethnicity of a person. Then all of those were destroyed, and the PersonHrRecord was loaded with a new chain of the same stuff.

Loading the user administration page literally kicked off 300,000 database connections, called half a million stored procedures, loaded ALL of the data client side, and filtered the results with fancy anonymous method calls over IList enumerators. And because they realized what a hulking mess this was on the database, all of the results were chucked into ViewState, which, to save time, was chucked into the Session.

I, along with two others, retooled the entire project using LINQ to SQL in about a month of half-time work, but wow, it was amazing.

If it had been a well-crafted joke, it would have been too meticulously executed to even be considered funny.

My conclusion is that the fault lies in the way they made the database calls, not in the patterns themselves. There was also a demonstrable lack of understanding of the performance implications of iteratively looping and loading.

The theory, I was told, was to make it reusable. From a design perspective, it looked like they might have been able to do that, but when I looked at what the actual layers were doing, apart from what they looked like in UML diagrams, MAN was it a mess.

Put another way, it’s one thing to design your application for reusability, but quite another to implement it in the same spirit. At the outset of the project, it probably was, long ago in a galaxy far away, a good idea to design a service layer separate from the business logic layer, a DAL, and so on. But to stop there and make the rest ‘work somehow’, without concern for how the data actually got into the bottom bits, and for the way it went unfiltered until the top bits, was just silly. Of course I shouldn’t blame the pattern; but if they had been aiming at the operations they were actually performing underneath the methodology, rather than at the methodology itself, it might have had a better ending.

Sure, it’s common to say ‘that’s just an implementation detail’ when you’re designing, and to scoff at it for a while, until it comes time to implement. But holy crap, when it comes time to implement, all you HAVE are the implementation details. At that point they deserve not just SOME of your attention, but ALL of it.

This is the difficulty with GoF: pretty conceptual pictures. People think they understand them, and it gives them the confidence to go implement this stuff without learning the other fundamentals. Like, well, SQL. (I’ve actually heard things like “I’m a C# developer, I don’t really DO SQL!” If I had my way, I’d put anyone who says that in a corner with a command-line OSQL.exe for the next three months, and not let them come back until they’d gone through Books Online from top to bottom three times.) Pretty pictures like this also make it easy to gloss over an actual lack of planning at the bottom, so that management doesn’t know they’ve bought into a dud until the delivered app kills the client’s database farm.

The goal of an application is to have it work. Whatever abstractions you put on it to make it ‘easier to develop’ are not billable features to the client. You do not get extra line items on the proposal for Service Locator patterns, and that’s certainly not a justification for screwing up the actual project.

Customer’s Boss: “This thing is a piece of crap, it kills our servers!”
Developer: “But it was completely decoupled from the implementation!”
Customer’s Boss: “Oh, right, sorry, I guess you did actually deliver what we ordered. I’m glad you know what you’re doing when it comes to object oriented design. So many people focus solely on the final product, without so much as a thought devoted to the design process. Clearly you have attended one of our finest higher educational institutions to know such things. And besides, I didn’t realize you were a SENIOR developer; I should be yelling at a grunt coder right now. You shouldn’t be troubled with implementation details.”
Developer’s Boss: “I smell a RAISE for one of our brightest SENIOR developers!”
Developer: Nancy Kerrigan and I are going to Disneyland!

Parameterizable SharePoint SqlQueryWebPart

I’ve been frustrated with the lack of a flexible way to just display the output of a raw SQL query in SharePoint, so I wrote one.

At first I thought I didn’t need parameters, but I was wrong. Everyone needs parameters, or else it’s not all that useful. Of course, that doesn’t mean you can’t still do parameterization correctly. So I took this lemon-like opportunity to demonstrate how to [make lemonade and...]

  • Write web parts for WSS3
  • Integrate a parameterizable SQL query with an in-page GridView and show the output (with a few security caveats I’ll readily admit, though SQL injection isn’t likely one of them; more on this in a bit)
  • Hook it up to the Form Web Part using the standard ASP.NET IWebPartRow interface

First things first: to run a web part that hits a SQL Server directly, you need Medium trust, or to have modified your policy file to allow SQL client permissions in your SharePoint environment.

Not far after the first thing is the second thing, which I’ll discuss now (second): I’m using the non-SharePoint-style web part, otherwise known as the ASP.NET web part (System.Web.UI.WebControls.WebParts.WebPart), as the base class for my part.

C#

    [Guid("7fd6fa72-9214-4cf0-b30b-ef7d931261cb")]
    public class SqlQueryWebPart : System.Web.UI.WebControls.WebParts.WebPart {

Why have I done this? Because it’s the new recommended best practice from the kids at Microsoft who built it. If you need SharePoint, you can still get at it with a wink, a nod, and the SharePoint object model.

I have the query text, the connection stuff, the authentication type, and probably a few other doodads I forgot to mention as web part properties. (If you look at my code, the sorting stuff is all commented out because I kept screwing it up, and I didn’t really want to mess with it anymore for the time being. The web part works without it.) The reason I’m bothering to show the properties here is to point out a few things: the attributes with which we adorn them are slightly different from the SharePoint web part variety you might know from WSS v2.

Personalizable means that the value will be serialized, stored, and re-populated after the part is instantiated.

WebBrowsable means that it will generate an editor field in the default Editor Part (these were formerly known as ToolParts in the old model) when you go in to configure the web part.

WebDisplayName just means “this is what we’ll label it in the default Editor Part”.

Category is short for “the name of the collapsible section under which it appears in the default Editor Part.”

C#

        [Personalizable(),
         WebBrowsable(true),
         WebDisplayName("Grid Lines"),
         Category("Query Details")]
        public GridLines GridLineConfig {
            get {
                return m_GridLines;
            }
            set {
                m_GridLines = value;
            }
        } private GridLines m_GridLines = GridLines.Both;

        
        [Personalizable(), 
         WebBrowsable(true),
         WebDisplayName("Server Name"),
         Category("Query Details")]
        public string ServerName {
            get { 
                return m_hostName; 
            }
            set { 
                m_hostName = value; 
            }
        } private string m_hostName = string.Empty;

        
        [Personalizable(), 
         WebBrowsable(true),
         WebDisplayName("Database Name"), 
         Category("Query Details")]
        public string DatabaseName {
            get { 
                return m_dbName; 
            }
            set { 
                m_dbName = value; 
            }
        } private string m_dbName = string.Empty;
        
        
        [DefaultValue(AuthType.SQL),
         Personalizable(), 
         WebBrowsable(true),
         WebDisplayName("Authentication"), 
         Category("Query Details")]
        public AuthType AuthenticationMethod {

            get { return m_AuthType; }
            set { m_AuthType = value; }

        } private AuthType m_AuthType = AuthType.Windows; 

        
        [Personalizable(),
         WebBrowsable(true),
         WebDisplayName("User Id"),
         Category("Query Details")]
        public string UserName {
            get { 
                return m_loginUser; 
            }
            set { 
                m_loginUser = value; 
            }
        } private string m_loginUser = string.Empty;

        

        [Personalizable(),
         WebBrowsable(true),
         WebDisplayName("Password"),
         Category("Query Details")]
        public string Password {

            get { return m_MaskedPassword; }

            set { m_MaskedPassword = value; }

        } private string m_MaskedPassword = string.Empty;

        
        [Personalizable(),
         WebBrowsable(false)]
        public string InnerPassword {

            get { return m_PrivatePassword; }
            set { m_PrivatePassword = value; }
        } private string m_PrivatePassword = string.Empty;

        
        [Personalizable(),
         WebBrowsable(true),
         WebDisplayName("Select Query"),
         Category("Query Details")]
        public string SQLQuery {
            get { 
                return m_QueryText; 
            }
            set { 
                m_QueryText = value; 
            }
        } private string m_QueryText = string.Empty;

        

        [Personalizable(),
         WebBrowsable(true),
         WebDisplayName("Page Size"),
         Category("Query Details")]
        public int PageSize {
            get { 
                return m_PageSize; 
            }

            set { 
                m_PageSize = value; 
            }
        } private int m_PageSize = 10;

Now, for the security caveats: it’s a little dangerous to put the SQL username and password in your web part instance’s configuration properties, because if someone decides to export the part, they can see the username and password in the exported file. This does not apply if you use the Windows authentication mode. So, you figure out whether or not you can work around the issue.

As for the Parameterization:

I extract the expected parameters, as specified in the web part property, in a single pass; that is, I walk through the string one character at a time looking for parameter names. This means I get them all in one pass, instead of with a bunch of regexes or finds and splits, etc. They then go into a list of expected ‘named parameters’ to be supplied by the connected web parts. If the ones in your query don’t match the ones connected to me, I display a message.

C#

private void ExtractExpectedParameters(string p) {

            m_expectedParams = new List<string>();

            bool collecting = false;
            StringBuilder sb = new StringBuilder();
            for (int x = 0; x < p.Length; x++) {

                char c = p[x];
                
                if (collecting) {
                    bool dropOut = false;

                    // Only treat the final character as part of the name if it
                    // actually belongs to one; otherwise a trailing ')' or ';'
                    // would get glued onto the last parameter's name.
                    if (x == (p.Length - 1) &&
                        char.IsLetterOrDigit(c)) {
                        sb.Append(c);
                        dropOut = true;
                    }

                    if (!char.IsLetterOrDigit(c) || dropOut) {
                        collecting = false;

                        // found a parameter name
                        m_expectedParams.Add(sb.ToString());
                        sb = new StringBuilder();
                    }
                    else {
                        sb.Append(c);
                    }
                }
                
                if (!collecting &&
                    c == '@') {
                    collecting = true;

                    m_HasParameters = true;
                }
            }
        }
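
To make the behavior concrete, here’s a hypothetical query (the table and parameter names are invented) and what the pass above collects from it:

C#

    // Illustrative only:
    string query =
        "SELECT * FROM Orders WHERE CustomerId = @custId AND OrderYear = @yr";

    // After ExtractExpectedParameters(query) runs, m_expectedParams
    // contains "custId" and "yr" (the names without the '@'),
    // and m_HasParameters is true.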

So, after I’ve established that I have some parameters, I have to know when to read their values, and after much tribulation and reference perusal, I have discovered that the time to suck the data from your producing source is OnPreRender:

C#

protected override void OnPreRender(EventArgs e) {

            // parse the SQL and rip out what we're looking for. Also set m_HasParameters
            ExtractExpectedParameters(this.SQLQuery);


            bool canQuery = false;

            if (m_HasParameters) {
                if (m_provider != null) {
                    m_provider.GetRowData(new RowCallback(GetRowData));
                }

                canQuery = false;
            }
            else
                canQuery = true;

            EnsureChildControls();

            m_CmdParameters = new Dictionary<string, object>();

            // we don't want to run any query unless we have all the right params and stuff.

            if ( m_HasParameters &&
                 m_provider != null) {
                PropertyDescriptorCollection props = m_provider.Schema;

                if (props != null &&
                    props.Count > 0 &&
                    props.Count == m_expectedParams.Count &&
                    m_tableData != null &&
                    m_tableData.Row != null &&
                    m_tableData.Row.ItemArray != null &&
                    m_tableData.Row.ItemArray.Length == m_expectedParams.Count) {
                    foreach (PropertyDescriptor prop in props) {
                        this.m_CmdParameters.Add(prop.Name, m_tableData.Row[prop.Name]);
                    }

                    canQuery = true;

                }
                else {
                    registerError(string.Format("Supply required parameters before results can be displayed. Expecting: {0}.", string.Join(",", m_expectedParams.ToArray())));
                }

            }
            else {

                if (m_HasParameters) {
                    registerError(string.Format("Based on the specified query, one or more parameter(s) are required.<br/> Please connect this web part to a Form Web Part to obtain the required input parameter(s): {0}.", string.Join(",", m_expectedParams.ToArray())));
                }
            }

            if (canQuery) {
                RetrieveData();
                RebindGrid();
            }

            base.OnPreRender(e);
        }
A note about catching exceptions: in this situation, I’ve decided to catch the error and display the output on the screen rather than let it escape. The intention is to let you know not only that something broke, but to help you fix it without bringing down the entire process. If you are really concerned, you can do what I have done in the query results bit and only display the error text to admins, but I display the query parameter errors to everyone, for the sake of the poor user who is hooking this up on their MySite (or wherever else someone new to SharePoint, but not to SQL, might try it) and trying to make sense of the idea of connected web parts. If you use my part, you can change that before deploying it. I’m fully aware that under most circumstances one does not want to catch and hold an exception, but in this case I feel that it is warranted.

Finally, the coup de grâce is to actually perform the parameterized query. No biggity now.

C#

private void RetrieveData() {

            SqlConnection oConn = null;
            if (m_MaskedPassword != passwordMask) {
                InnerPassword = m_MaskedPassword;
                m_MaskedPassword = passwordMask;
            }


            if (m_AuthType == AuthType.SQL) {
                m_connectionString = string.Format("Data Source={0};Initial Catalog={1};User Id={2};Password={3};Persist Security Info=false", m_hostName, m_dbName, m_loginUser, m_PrivatePassword);
            }
            else {
                m_connectionString = string.Format("Data Source={0};Initial Catalog={1};Integrated Security=SSPI", m_hostName, m_dbName);
            }
            
            try {
                oConn = new SqlConnection(m_connectionString);

                oConn.Open();

                using (SqlCommand cmd = oConn.CreateCommand()) {

                    cmd.CommandText = this.m_QueryText;
                    cmd.CommandType = CommandType.Text;

                    if (m_HasParameters) {
                        //do parameters!
                        foreach (String s in m_CmdParameters.Keys) {
                            if (!string.IsNullOrEmpty(m_CmdParameters[s].ToString())) {
                                cmd.Parameters.AddWithValue("@" + s, m_CmdParameters[s]);
                            }
                        }
                    }

                    if (!m_HasParameters ||
                          cmd.Parameters.Count == m_CmdParameters.Count) {
                        
                        // dispose the reader when we're done with it
                        using (SqlDataReader dr = cmd.ExecuteReader()) {
                            dt = new DataTable();
                            dt.Load(dr);
                        }

                    }
                    else {
                       
                        if (m_HasParameters) {
                            registerError(string.Format("Supply required parameters before results can be displayed. Expecting: {0}.", string.Join(",", m_expectedParams.ToArray())));
                        }

                    }
                }
            }
            catch (Exception ex) {

                // this is a Smart Error Handler, in that it shows you a generic message
                // if you're a Schmoe and a Detailed description if you're a site collection admin.


                SPUser usr = SPContext.GetContext(this.Context).Web.CurrentUser;

                if (usr.IsSiteAdmin) {
                    registerError(string.Format("{0}: {1} <p style=\"font-weight:bold;\">{2}</p><p>Connection String = {3}</p>", ex.GetType().Name, ex.Message, ex.StackTrace, m_connectionString));
                }
                else {
                    registerError("Error loading SqlQuery Web Part. Please contact your administrator.");
                }

            }
            finally {
                if (oConn != null)
                    oConn.Dispose();
            }

        }

This is a rough model of the web part, in need of a bit of refactoring to remove some duplication, but you get the idea. I hope.

Attached are the code files/project for VS 2008. Click to download. I’ve decided, after considering it carefully, that I will let you have the WSP already built. No warranty. If you don’t understand how it works, you can ask, and I’ll try to answer. And NO, I won’t customize it for your needs, nor will I change it to do something else, with one exception: if you figure out a way to fix the paging junk, I’ll put your fix in and give you credit. (Yes, I could do it eventually, but I don’t want to mess with it right now!) So, have at it!

Late-breaking edit: the code provided here has been fixed now that I’ve retrieved my real version from source control. I’ve corrected the problem with the ‘default values’.

(PS: Use WSSOnVista from BambooSolutions.com to install SharePoint on your development Vista host and run/test/debug without remote desktop etc.)